Level IV Project Final Report
Design and Build of a Robotic Ping Pong Player
Students: Ryan Harrison (1078238), Jamie Mackenzie (1079055), Brett Morris
(1092040), Jarrad Springett (1092429)
Supervisor: Frank Wornle
Executive Summary
This report details the theory and design of a ping pong-playing robot. A similar project
was undertaken in 1988 by a team at AT&T Bell Laboratories, headed by Russell L.
Andersson. Andersson achieved his goal of designing a robot capable of playing ping
pong, and it is hoped that this project will duplicate his success despite a significantly
smaller budget. If possible, it is hoped that this project will eventually improve upon
Andersson's original work.
For a working robot to be produced, a number of different systems need to be designed:
the robotic arm itself (including its actuators), a vision system for ball detection, and the
control system. While these systems are being designed, a method of interfacing them
must also be developed.
The first issue considered is the design of the arm itself. The basic principles of robotics
are investigated, and a testing program is written to implement them for a generic serial
robot arm. This program is used to iterate through various arm designs until a final
configuration is chosen. The scale and relative link lengths are optimised to find the
smallest arm possible that meets the criteria for playing the modified ping pong game.
Simulations are then used to select motors as actuators for the robotic arm. These
simulations take into account the inertia and required velocities of the robotic arm and
produce the required joint torques, from which the motors are selected.
The physical design of the robotic arm is linked closely to the modelling and design work.
Joints are designed to have the range of motion required to play basic shots while being as
lightweight as possible. Approximate links are designed and then tested to ensure they
do not collide with each other. Once a viable set of links has been decided upon, they are
then detailed with other parts, such as bearings.
The vision system uses two FireWire cameras to determine the ping pong ball location in
3D space. First, the ball centroid is located in each 2D camera image. This information is
then combined to locate the ball in 3D space. Camera placement and processing speed
are both taken into account. A strong emphasis is placed on camera calibration in order
to minimise errors in locating the ping pong ball.
Control of the robotic arm is achieved through a microcontroller and motor control
circuits. The microcontroller takes in the required motor torques and converts them into
control voltages, which it sends to the motor control circuits. These circuits then output
a current to the motors corresponding to the control voltage.
The sections of the project interface with each other through various methods. The
vision system sends the control system the position of the ball in 3D space. The control
system then calculates where the arm needs to be and sends the required motor torques
to the microcontroller. The microcontroller applies these torques and returns the new
positions of the arm. Finally, the robotic arm moves into a position from which it can
strike the ball.
Disclaimer
We declare that all material in this assignment is our own work except where there is
clear acknowledgement and reference to the work of others. We have read the University
Policy Statement on Plagiarism, Collusion and Related forms of Cheating.
Brett Morris................................................................................................................
Jarrad Springett ..........................................................................................................
Jamie Mackenzie........................................................................................................
Ryan Harrison ............................................................................................................
Acknowledgements
We would like to thank our project supervisor, Frank Wornle, for supporting the Ping
Pong Robot.
We thank the Adelaide University workshop for their contribution to the Ping Pong
Project.
Brett Morris was the author of the Model Development, Simulation and Testing, and
Control sections. Jamie Mackenzie was the author of the Mechanical Design section. Brett
Morris and Jamie Mackenzie dual-authored the Arm Development section. Jarrad
Springett was the author of the Vision System section. Ryan Harrison authored the
Control Hardware and Communication section. All group members contributed to the
editing of the report.
Contents
Executive Summary ......................................................................... i
Contents .......................................................................................... v
List of Figures ................................................................................ ix
List of Tables .................................................................................. xi
1 Introduction ............................................................................... 1
1.1 Problem Definition................................................................................1
1.2 Aim and Significance ...........................................................................2
2 Literature Review ...................................................................... 3
2.1 Robotic Arms ........................................................................................3
2.2 Robotics Fundamentals and Control ..................................................4
2.3 Vision Systems .....................................................................................5
3 Model Development .................................................................. 7
3.1 Kinematic definitions ...........................................................................7
3.1.1 Position and Orientation of rigid bodies...................................................................8
3.1.2 Rotation Convention ...............................................................................................9
3.1.3 Coordinate Frames and Transforms .....................................................................10
3.1.4 Denavit-Hartenberg Convention............................................................................12
3.1.5 D-H parameters....................................................................................................13
3.1.6 Assignment of Coordinates to link.........................................................................14
3.1.7 Calculating Transforms.........................................................................................14
3.2 Inverse Kinematics.............................................................................15
3.2.1 Inverse Kinematic Methods...................................................................................16
3.3 Jacobians ............................................................................................19
3.3.1 Jacobian Input Parameters ...................................................................................20
3.4 Singularities ........................................................................................23
3.4.1 Boundary singularities ..........................................................................................23
3.4.2 Internal singularities..............................................................................................24
3.5 Trajectory Generation ........................................................................24
3.6 Dynamics.............................................................................................26
3.6.1 Methods for Solving Robot Dynamics ...................................................................27
3.6.2 Newton-Euler implementation...............................................................................28
4 Simulation and Testing........................................................... 36
4.1 Virtual Reality Models ........................................................................37
4.1.1 Graphical User Interface.......................................................................................40
4.1.2 Manual Control .....................................................................................................40
4.1.3 Programmed shots ...............................................................................................41
4.2 Results of robot simulation ...............................................................42
5 Control System........................................................................ 46
5.1 Control Simulation..............................................................................49
5.1.1 Plant.....................................................................................................................49
5.1.2 Control Simulation Results....................................................................................51
5.2 Real time implementation of control System...................................54
5.3 Description of Main Control Loop.....................................................55
5.4 Control Program GUI..........................................................................57
5.5 Viability of real-time Implementation ................................................59
5.6 Results of Implemented Control System..........................................59
6 Arm Development ................................................................... 62
6.1 Requirements......................................................................................62
6.2 Research .............................................................................................63
6.3 Design Description.............................................................................64
6.4 Final Design Parameters....................................................................66
6.4.1 DH parameters.....................................................................................................66
6.4.2 Link Lengths.........................................................................................................67
6.4.3 Reach Capabilities of the manipulator...................................................................69
7 Mechanical Design.................................................................. 70
7.1 Test Rig ...............................................................................................70
7.1.1 Design..................................................................................................................70
7.1.2 Results .................................................................................................................72
7.2 Arm Design .........................................................................................72
7.2.1 Link 6 ...................................................................................................................73
7.2.2 Link 5 ...................................................................................................................74
7.2.3 Link 4 ...................................................................................................................74
7.2.4 Link 3 ...................................................................................................................75
7.2.5 Link 2 ...................................................................................................................76
7.2.6 Link 1 ...................................................................................................................76
7.2.7 Total Assembly.....................................................................................................78
7.2.8 Motor Selection ....................................................................................................79
7.2.9 Bearings...............................................................................................................80
7.2.10 Pulleys and Belts..............................................................................................81
7.2.11 Couplings.........................................................................................................82
8 Vision System.......................................................................... 84
8.1 Overview..............................................................................................84
8.2 Performance Measurement................................................................85
8.3 Locating the Ping Pong Ball in 2D Space.........................................85
8.3.1 Error Reduction and Edge Detection Techniques..................................................86
8.3.2 Colour-based Binary Image Generation ................................................................88
8.3.3 Alternatives to Colour-based Binary Image Generation .........................................89
8.3.4 Colour Calibration.................................................................................................90
8.3.5 Timing ..................................................................................................................91
8.3.6 Locating the Ping Pong Ball in 2D using Centroid Identification.............................92
8.3.7 Locating the Ping Pong Ball in 2D using Trail Identification ...................................92
8.4 Locating the Ping Pong Ball in 3D Space (Depth Perception)........93
8.4.1 Stereoscopic depth perception..............................................................................93
8.5 Camera Placement .............................................................................94
8.5.1 Unused Field Of View...........................................................................................95
8.5.2 Far-Field Resolution .............................................................................................95
8.5.3 Accuracy Requirements........................................................................................96
8.6 Camera Calibration.............................................................................97
8.6.1 2D Camera Calibration .........................................................................................97
8.6.2 3D Camera Calibration .........................................................................................98
8.6.3 Adaptive Error Minimisation ................................................................................103
8.7 Hardware ...........................................................................................105
8.8 2D Camera Calibration Experiment.................................................106
8.8.1 Procedure ..........................................................................................................106
8.8.2 Camera Vector Field Results ..............................................................................108
8.8.3 Discussion..........................................................................................................110
8.9 Conclusion ........................................................................................112
9 Control Hardware .................................................................. 113
9.1 Microcontroller .................................................................................113
9.2 Output................................................................................................114
9.3 Input...................................................................................................117
9.4 Software ............................................................................................119
9.4.1 Encoder Reading................................................................................................119
9.4.2 Motor Update .....................................................................................................122
9.4.3 Serial Communication ........................................................................................124
9.4.4 Software Behaviour ............................................................................................124
10 Communication .................................................................. 128
11 Cost Analysis ..................................................................... 130
12 Conclusion ......................................................................... 132
13 Appendices......................................................................... 134
Appendix A – Official Rules of ‘Robat’ .......................................................134
Appendix B – Project Proposal (excerpt)...................................................138
Appendix C – Test Rig Drawings ................................................................139
Appendix D – Main Robot Class for Control Program ..............................146
Appendix E – Vision System Code ............................................................181
Appendix F – Microcontroller source code (excerpt)...............................234
Appendix G – Documentation of Main Robot Class.................................265
Appendix H – Connection guide ................................................................269
14 References.......................................................................... 273
List of Figures
Figure 2-1: PUMA arm used by Andersson (1988)............................................................ 4
Figure 3-1: Position and orientation representation (Adapted from Siciliano)................... 8
Figure 3-2: X-Y-Z fixed angles (Craig 2005)................................................................... 10
Figure 3-3: DH convention ............................................................................................... 13
Figure 3-4: Parameters for Newton-Euler formulation (Siciliano 1999).......................... 29
Figure 4-1: Output from modified toolbox ....................................................................... 37
Figure 4-2: VR model (arm shown in ready position)...................................................... 38
Figure 4-3: VR model of physical arm (arm shown in ready position) ............................. 39
Figure 4-4: GUI driver for VR model............................................................................... 40
Figure 4-5: Simulink connection to VR model................................................................. 42
Figure 4-6: Torque Plots ................................................................................................... 43
Figure 5-1: Torque Control Scheme ................................................................................. 48
Figure 5-2: Simulink simulation of control system .......................................................... 49
Figure 5-3: Manipulator Plant........................................................................................... 50
Figure 5-4: Simulation Results for Joint Position Open Loop (plant error 30%).............. 52
Figure 5-5: Simulation Results for Joint Position Closed Loop (plant error 30%) ........... 53
Figure 5-6: State diagram of control system..................................................................... 54
Figure 5-7: GUI for the control program .......................................................................... 57
Figure 5-8: Start-up Position............................................................................................. 58
Figure 6-1: Five different arm designs.............................................................................. 63
Figure 6-2: Representation of DH parameters .................................................................. 67
Figure 6-3: Reach capabilities of arm............................................................................... 69
Figure 7-1: Test rig (Front on) .......................................................................................... 71
Figure 7-2: Test rig (Isometric)......................................................................................... 72
Figure 7-4: Link 5 ............................................................................................................. 74
Figure 7-5: Link 4 ............................................................................................................. 75
Figure 7-6: Link 3 ............................................................................................................. 75
Figure 7-7: Link 2 ............................................................................................................. 76
Figure 7-8: Link 1 ............................................................................................................. 77
Figure 7-9: Total Assembly .............................................................................................. 78
Figure 7-10: Photo of the pulley and belt ......................................................................... 82
Figure 7-11: Photo of the coupling ................................................................................... 83
Figure 8-1: Ping Pong Ball Image after 2D image processing (blurring and contrast
reduction) .................................................................................................................. 88
Figure 8-2: Binary image of a ping pong ball................................................................... 89
Figure 8-3: Edge enhanced image of chair, hand, ball and overhead projector................ 90
Figure 8-4: Identifying the colour-space of the ping pong ball ........................................ 91
Figure 8-5: Centroid of a binary image of a ping pong ball ............................................. 92
Figure 8-6: Stereoscopic rays providing a 3D ball location .............................................. 94
Figure 8-7: Stereoscopic depth error dx/dθ, with respect to target distance r and camera
spacing y.................................................................................................................... 96
Figure 8-8: 3D control point identification ....................................................................... 99
Figure 8-9: Singular configurations of correlated lines caused by linear or closely placed
points....................................................................................................................... 100
Figure 8-10: 3D calibration control points in virtual space (front view) ........................ 101
Figure 8-11: 3D calibration control points in virtual space (top view)........................... 101
Figure 8-12: 3D calibration results ................................................................................. 102
Figure 8-13: Minimization of distance between correlated lines.................................... 103
Figure 8-14: Alignment test pattern prior to calibration................................................. 107
Figure 8-15: Camera calibration raw FOV results.......................................................... 107
Figure 8-16: Camera field of view vector field .............................................................. 108
Figure 8-17: Typical midrange per-pixel error before smoothing .................................. 109
Figure 8-18: Wide viewing range 2D camera calibration results ................................... 110
Figure 8-19: Edge saturation along right side of screen ................................................. 111
Figure 8-20: 2D Camera calibration edge errors caused by edge saturation .................. 111
Figure 9-1: Wytec Dragon 12 Development Board........................................................ 113
Figure 9-2: The LMD18245 Full-Bridge Amplifier ....................................................... 115
Figure 9-3: Simple low-pass filter .................................................................................. 115
Figure 9-4: Current limits for given D input, for a loaded motor and max DAC Ref .... 116
Figure 9-5: The HRPG Series Encoder........................................................................... 117
Figure 9-6: Possible encoder output on encoder rotation in the clockwise direction ..... 120
Figure 9-7: Servo motor signal behaviour ...................................................................... 123
Figure 9-8: Simplified state diagram of the microcontroller software ........................... 125
Figure 9-9: Non-maskable interrupts .............................................................................. 125
Figure 9-10: Maskable interrupts.................................................................................... 126
List of Tables
Table 3-1: Comparison of dynamics formulations ........................................................... 27
Table 3-2: Formulations applied to a six degree of freedom arm..................................... 28
Table 4-1: Data set for shot time of 0.5 seconds............................................................... 45
Table 6-1: DH Parameters ................................................................................................ 66
Table 6-2: D-H parameters, including finalized lengths................................................... 68
Table 7-1: Applicable characteristics of the chosen motors ............................................. 80
Table 8-1: 2D Image processing functions ....................................................................... 87
Table 9-1: Encoder test results........................................................................................ 119
Table 10-1: Data packet sent from vision system to control system .............................. 128
Table 10-2: Data packet from control system to microcontroller................................... 128
Table 10-3: Data packet sent from microcontroller to control system ........................... 129
Table 11-1: Cost Analysis............................................................................................... 131
1 Introduction
The game of table tennis is an exciting, fast-paced game enjoyed by many. One of the
main attractions of table tennis is how easy it is to learn the basic rules and play a match.
For a robot, however, the game is anything but easy. The issues associated with tracking
a fast-moving object, and with controlling a robot so that it makes contact with that
object, are enormous.
These issues were overcome in the late 1980s by a team at AT&T Bell Laboratories who
successfully created a robotic system to play a modified version of table tennis.
This project will attempt to emulate the success of the team at AT&T Bell Laboratories
and overcome the issues associated with creating a robotic system to play table tennis.
1.1 Problem Definition
The table tennis robot built by the team at AT&T Bell Laboratories was designed to meet
the specifications outlined in a set of robot table tennis rules called ‘Robat’, proposed by
John Billingsley. These rules (Appendix A - Official Rules of ‘Robat’) are designed so
that two robot table tennis players can compete against each other.
This project will adhere to these design rules and specifications as closely as possible.
Small changes will be made due to the absence of a robotic opponent and to budget and
time considerations.
1.2 Aim and Significance
The aim of this project is to complete the design and construction of a table tennis
playing robot and achieve all objectives outlined in the project proposal.
The significance of this project is not immediately apparent, as there is no critical need
for a robot which plays table tennis. Its design and development, however, pose many
problems that must also be overcome in other robotic applications. For example, the
control system for the robotic arm can be applied to any other industry which uses a
robotic arm, as the control system is designed to be robust. The vision system itself is
quite versatile and may be used for a wide variety of applications.
2 Literature Review
Due to the nature of this project, a traditional literature review is not possible. The text
upon which this project is based, A Robot Ping-Pong Player by Andersson (1988), was
used as the starting point for the literature review. In addition, a large amount of
research was conducted prior to the start of the project. Texts and journals on past
attempts at a robotic table tennis player could not be found, and thus sources of literature
on specific sections of the project were sought. The sections which were researched were
robotic arms, robotic fundamentals, vision systems, and control systems.
2.1 Robotic Arms
The arm used by Andersson (1988) and his team at the AT&T Bell Laboratories was the
PUMA 260 arm, which is a 6 degree of freedom arm produced by Unimation. Because
this project had a relatively small budget, the purchase of an expensive 6 degree of
freedom arm was not an option. As a result, an arm needed to be built and so literature
on different arm configurations was found.
Figure 2-1: PUMA arm used by Andersson (1988)
Colson and Perreira (1983) discussed different arrangements that are commonly used in
industrial robots. These were studied to gain an understanding of how arm links
interacted and how common problems were overcome. Keramas (1999) and Siciliano
(2004) were then used to review common arm configurations with their workspaces,
advantages, and disadvantages.
2.2 Robotics Fundamentals and Control
Robot Technology Fundamentals (Keramas 1999) gave an overall basic description of
the robot design process without going into specific detail about the mathematics
involved. The steps described include kinematics, dynamics, and trajectory generation.
Kinematics definitions were consistent across Siciliano (1999), Lee (1982), Mayer (1981)
and Paul (1991). Only Craig (2005) applied a modified system which, while easier to
understand, has no supporting text, so this method was abandoned. The remaining texts
all used the standard system and defined the mathematics in the same way. Lee (1983)
was the most user-friendly and most of the syntax was taken from his paper.
The treatment of robot dynamics varies extensively across different texts. Most of the
older texts, such as Paul (1981) and Lee (1983), calculate dynamics using the Lagrangian
formulation. Hollerbach (1980) compares the Lagrangian formulation to the newer
Newton-Euler alternative and shows that Newton-Euler is significantly more efficient
than the Lagrangian, and so should be used for real-time applications. Hollerbach (1980)
states the Newton-Euler equations, yet does not give enough information to implement
them. Corke (2001) states the equations in a user-friendly format; however, the method
explained in Siciliano (1999) takes into account a larger number of physical factors and
hence gives a more realistic answer.
A variety of traditional robot control systems were investigated, including PID,
decentralised feed-forward computation, and computed torque control. These were
investigated in both joint-level and centralised forms. Siciliano (1999) and Khalil (2002)
provide the most in-depth descriptions of the more recent control schemes, such as
computed torque. Papers such as Paul (1991) and Mayer (1981) give information about
less advanced control systems and how they are compensated to meet the requirements
of controlling robotic manipulators.
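The joint-level PID scheme surveyed above can be illustrated with a short discrete-time sketch. This is a generic textbook-style controller in Python, not the project's implementation; the gains and time step are placeholder values chosen only for the example.

```python
class PID:
    """Discrete PID controller: a generic sketch of a joint-level
    control scheme, with placeholder (non-project) gains."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulated error
        self.prev_error = 0.0     # error at the previous time step

    def step(self, setpoint, measured):
        """Return the control effort for one sample period."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# One step toward a joint-angle setpoint of 1 rad from rest.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
print(pid.step(setpoint=1.0, measured=0.0))
```

In a decentralised joint-space scheme, one such controller runs per joint; a centralised scheme such as computed torque instead uses the full dynamic model to decouple the joints.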
2.3 Vision Systems
Robot vision literature is concerned with processing 2D images into useful information. It discusses methods of processing 2D images to improve the detection of features and reduce error, algorithms for detecting features and objects in images, and parallel processing techniques that improve the computational feasibility of vision systems. Calibration of cameras is also an important aspect of vision systems.
Error reduction and 2D image processing involve applying pixel operations to 2D image data to improve the results of the subsequent feature recognition phases of a vision system. Klaus (1986) discusses a variety of image processing techniques and the mechanisms of image formation. Sood (1992) also discusses error minimisation. Parker (1997) addresses methods for reducing error caused by structured noise, motion, and illumination.
A variety of image interpretation algorithms are discussed in robot vision literature. Edge
detection is an essential part of most vision systems and Parker, Sood, Klaus and Vernon
all discuss methods for edge detection. Klaus (1986) discusses pattern classification by matching the centroids and moments of inertia of different objects. Binary images are
also discussed by Klaus, where various methods for analysing binary images to identify
parts and their orientation are presented. Skeletonisation is the process of reducing
binary image areas into lines for extracting useful information and is discussed in Klaus
(1986) and Parker (1997). Skeletonisation could be a useful technique for analysing ping pong ball trails in order to provide multiple data points.
Klaus (1986) and Sood (1992) discuss parallel processing techniques. However, more
recent literature has not required parallel processing to make computer vision feasible due
to significant increases in computing capabilities since 1992.
Camera calibration is critical for any 3D vision system. Vernon (2000) discusses various
camera calibration techniques. Gruen (2001) examines calibration techniques including
manual and self-calibration.
3 Model Development
An important difference between this project and the 1988 project undertaken by Andersson's team is that the latter used an off-the-shelf industrial PUMA robot. Due to the large expense associated with purchasing an industrial robot, a working robotic arm (or manipulator) has to be developed from scratch. The basic principles of and methods for manipulator design have been well established for over 20 years, and all of these have to be investigated and implemented before testing and construction of the arm can begin.
The following section defines the method for describing the coordinate relationships between arm links. These mathematical relations are then used in the inverse kinematics, which determines the desired joint positions. Jacobians are used to determine the joint velocities, at those joint positions, which are needed to hit the ball. Together, the final joint positions and velocities define the boundary conditions of a played shot in ping-pong. Shot trajectories are then investigated. The final component required to implement a model of a robot manipulator is described in the robot dynamics section, which is used to determine the moments and forces the arm is subjected to as it carries out a shot defined by the trajectory.
3.1 Kinematic definitions
This section deals with the kinematic relationships of the manipulator. Kinematics is the study of the spatial configuration of the robot arm as a function of time, without regard to the forces/moments that cause that motion (Lee 1982). A robotic manipulator can be
mathematically modelled using a kinematic chain of rigid bodies or links. These links
can be connected using either prismatic or revolute joints. The resulting motion of the
arm is a combination of the movements of all links in the chain. The first step in
modelling a manipulator is to define a system which relates each link to that of its
neighbour. The main focus has been to develop an equation describing the position and
orientation of the end-effector, the bat, within a global coordinate system with origin at
the base of the robot.
3.1.1 Position and Orientation of rigid bodies
In order to perform a kinematic analysis on a manipulator, each link must be able to be
completely defined in space both in position and orientation. To do this, a body-attached
coordinate frame is allocated to each link that rotates/translates with respect to that link
(Paul 1981). Once a coordinate system is established, any one coordinate frame can be
related to another by both a position vector and a rotation matrix. The following notation
for describing position and rotation relations between coordinate systems is taken from
Craig (2005).
Figure 3-1: Position and orientation representation (Adapted from Siciliano)
$$
{}^{A}P_{B} = \begin{bmatrix} P_{x} \\ P_{y} \\ P_{z} \end{bmatrix} \qquad (3.1)
$$
Equation (3.1) gives the position vector of the origin of frame B relative to the A coordinate frame.
The rotation matrix from frame A to frame B which describes the orientation of
coordinate frame B in respect to A is shown below.
$$
{}^{A}_{B}R = \begin{bmatrix} {}^{A}\hat{X}_{B} & {}^{A}\hat{Y}_{B} & {}^{A}\hat{Z}_{B} \end{bmatrix}
= \begin{bmatrix}
\hat{X}_{B}\cdot\hat{X}_{A} & \hat{Y}_{B}\cdot\hat{X}_{A} & \hat{Z}_{B}\cdot\hat{X}_{A} \\
\hat{X}_{B}\cdot\hat{Y}_{A} & \hat{Y}_{B}\cdot\hat{Y}_{A} & \hat{Z}_{B}\cdot\hat{Y}_{A} \\
\hat{X}_{B}\cdot\hat{Z}_{A} & \hat{Y}_{B}\cdot\hat{Z}_{A} & \hat{Z}_{B}\cdot\hat{Z}_{A}
\end{bmatrix} \qquad (3.2)
$$
where the columns of ${}^{A}_{B}R$ are the unit vectors of frame B projected onto the unit vectors of frame A. Using the rotational relationship between frames A and B, a coordinate in frame B can be expressed in frame A by equation 3.3:
$$
{}^{A}p = {}^{A}_{B}R\,{}^{B}p \qquad (3.3)
$$
The inverse of ${}^{A}_{B}R$ gives the orientation of coordinate system A in terms of coordinate system B, hence:
$$
{}^{B}p = {}^{B}_{A}R\,{}^{A}p \quad \text{where} \quad {}^{B}_{A}R = {}^{A}_{B}R^{-1} = {}^{A}_{B}R^{T} \qquad (3.4)
$$
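As a quick numerical sketch of equations 3.3 and 3.4 (illustrative Python, not part of the project software; the 30-degree angle is an arbitrary example), a rotation about the z axis maps a point from frame B to frame A, and the transpose of the same matrix maps it back:

```python
import math

def rot_z(theta):
    """Rotation matrix of frame B relative to frame A for a rotation
    of theta about the shared z axis (columns are B's unit vectors in A)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def transpose(R):
    return [list(col) for col in zip(*R)]

def mat_vec(R, p):
    return [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]

# A point known in frame B, expressed in frame A (equation 3.3) ...
R = rot_z(math.radians(30.0))
pB = [1.0, 0.0, 0.0]
pA = mat_vec(R, pB)

# ... and mapped back using the transpose, i.e. the inverse (equation 3.4).
pB_again = mat_vec(transpose(R), pA)
```

The round trip recovers the original point, confirming that the transpose is the inverse for a pure rotation.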
3.1.2 Rotation Convention
Rotation transforms can be specified as three consecutive rotations about coordinate axes. The order in which these rotations are performed affects the result. There are three main rotation conventions used in robotics: Euler angles, Z-Y-Z angles and X-Y-Z fixed angles (Paul 1981). In this report, rotations are represented by X-Y-Z fixed angles, in which an x-, y- and then z-rotation is performed, using the right-hand rule, about the original axes (Paul 1981).
As shown in Figure 3-2, with $\hat{X}_{A}$, $\hat{Y}_{A}$, $\hat{Z}_{A}$ being the original axes:
Figure 3-2: X-Y-Z fixed angles (Craig 2005)
3.1.3 Coordinate Frames and Transforms
A frame is a coordinate system that is rigidly attached to a link. Subsequent frames are
related by a rotation and position. Frames are represented by a homogenous transform
matrix T which can be considered to consist of four submatrices (Lee 1983).
$$
T = \begin{bmatrix} R_{3\times3} & p_{3\times1} \\ f_{1\times3} & 1 \end{bmatrix}
= \begin{bmatrix} \text{Rotation Matrix} & \text{Position Vector} \\ \text{Perspective Transform} & \text{Scaling Factor} \end{bmatrix} \qquad (3.5)
$$
The upper left-hand corner of T contains the rotation matrix and the upper right-hand side contains the position vector, both as described above. The perspective transform is set to [0 0 0] for robotics applications. Also, the scaling factor is set to one for this application; non-unity scaling factors are used for graphics purposes, which utilise the same transform matrix (Lee 1982).
Using the coordinate transform matrix ${}^{A}_{B}T$, a position in space with respect to frame B can be represented in coordinate frame A using equation 3.6:
$$
{}^{A}p = {}^{A}_{B}T\,{}^{B}p \qquad (3.6)
$$
The homogenous transform matrix is often expressed in the following form for use in
other kinematic functions. This convention was adopted from Lee (1982).
$$
T = \begin{bmatrix}
n_{x} & s_{x} & a_{x} & p_{x} \\
n_{y} & s_{y} & a_{y} & p_{y} \\
n_{z} & s_{z} & a_{z} & p_{z} \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (3.7)
$$
Just as for the rotation matrix, the inverse of the transform gives the opposite relationship to the original transform:
$$
{}^{B}p = {}^{B}_{A}T\,{}^{A}p \quad \text{where} \quad {}^{B}_{A}T = {}^{A}_{B}T^{-1} \qquad (3.8)
$$
Due to the structure of the transform matrix, the inverse is easily computed as shown in equation 3.9:
$$
T^{-1} = \begin{bmatrix}
n_{x} & n_{y} & n_{z} & -n^{T}p \\
s_{x} & s_{y} & s_{z} & -s^{T}p \\
a_{x} & a_{y} & a_{z} & -a^{T}p \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (3.9)
$$
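The closed-form inverse of equation 3.9 can be checked numerically. The sketch below (illustrative Python; the rotation angle and translation are made-up values) assembles a homogeneous transform, inverts it by transposing the rotation block and negating the rotated position, and verifies that the product with the original gives the identity:

```python
import math

def make_T(R, p):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a position."""
    return [R[0] + [p[0]], R[1] + [p[1]], R[2] + [p[2]], [0.0, 0.0, 0.0, 1.0]]

def invert_T(T):
    """Closed-form inverse of equation 3.9: transpose the rotation block and
    set the new position column to -R^T p."""
    R_t = [[T[j][i] for j in range(3)] for i in range(3)]
    p = [T[i][3] for i in range(3)]
    p_inv = [-sum(R_t[i][j] * p[j] for j in range(3)) for i in range(3)]
    return make_T(R_t, p_inv)

def mat4_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

c, s = math.cos(0.7), math.sin(0.7)
T = make_T([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]], [0.3, -0.2, 0.5])
identity_check = mat4_mul(T, invert_T(T))  # should be the 4x4 identity
```

This avoids a general 4x4 matrix inversion, which matters when transforms are inverted inside a real-time loop.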
3.1.4 Denavit-Hartenberg Convention
In order to perform a kinematic analysis on a manipulator, a coordinate system needs to
be placed on each link in the chain. From this, a homogenous transform matrix can be
produced. Coordinate systems can be attached to links arbitrarily, but a consistent
convention for assigning coordinate systems to links is desired. Denavit and Hartenberg
proposed a standardised system. This system, now known as the Denavit-Hartenberg
system, uses four separate parameters to completely define each coordinate frame with
respect to the previous frame (Piper 1968). There are two different variations of the
system, the standard and the modified system. For the purpose of this project only the
standard system is used.
3.1.5 D-H parameters
Figure 3-3 shows the four parameters used to describe joint coordinates defined by the DH convention.
Figure 3-3: DH convention
$\theta_{i}$ is the joint angle from the $x_{i-1}$ axis to the $x_{i}$ axis about the $z_{i-1}$ axis.
$d_{i}$ is the distance from the origin of the $(i-1)$th coordinate frame to the intersection of the $z_{i-1}$ axis with the $x_{i}$ axis, along the $z_{i-1}$ axis.
$a_{i}$ is the offset distance from the intersection of the $z_{i-1}$ axis with the $x_{i}$ axis to the origin of the $i$th frame, along the $x_{i}$ axis (or the shortest distance between the $z_{i-1}$ and $z_{i}$ axes).
$\alpha_{i}$ is the offset angle from the $z_{i-1}$ axis to the $z_{i}$ axis about the $x_{i}$ axis (using the right-hand rule).
This set of parameters, taken from Lee (1983), can be used to define the relationship between any set of axes. For each joint, three of the above parameters stay constant. The final parameter changes in value to indicate joint movement and is termed the joint variable. Note that for rotary joints, $\theta_{i}$ is the joint variable, and for prismatic joints either $d_{i}$ or $a_{i}$ can be made the joint variable depending on the direction of translation, which can be along the joint $z_{i}$ or $x_{i}$ axis.
3.1.6 Assignment of Coordinates to Links
The four DH parameters do not represent a unique set of coordinate systems. It is possible to obtain the same DH parameters from different sets of coordinate axes. To avoid confusion, a standard convention for assigning coordinates to manipulators is used (Siciliano 1999). Using this convention, future readers should be able to reproduce the same set of coordinate axes from the D-H parameters in this report.
Axis i denotes the axis of the joint connecting link i-1 to link i.
• Axis zi is selected to be along the axis of joint i+1.
• The origin Oi is located at the intersection of axis zi with the common normal to
axes zi-1 and zi.
• Axis xi is chosen along the common normal to axes zi-1 and zi with direction from
joint i to joint i+1.
• Axis yi is chosen to complete the right-handed frame.
3.1.7 Calculating Transforms
Each of the four DH parameters can be expressed as a basic homogeneous transformation, either a single translation or a single rotation. The total transform is the product of these four elementary transforms, as shown in equation 3.10 (Piper 1968). For application to a serial manipulator, transformations between adjacent links are found and denoted by ${}^{i-1}A_{i}$. These single transforms can then be multiplied with other adjacent transforms to determine the relationships between non-successive links, such as the end-effector to the base.
$$
{}^{i-1}A_{i} = T_{z,d}\,T_{z,\theta}\,T_{x,a}\,T_{x,\alpha} =
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & d_{i} \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} c\theta_{i} & -s\theta_{i} & 0 & 0 \\ s\theta_{i} & c\theta_{i} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & a_{i} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & c\alpha_{i} & -s\alpha_{i} & 0 \\ 0 & s\alpha_{i} & c\alpha_{i} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$
$$
= \begin{bmatrix}
c\theta_{i} & -s\theta_{i}c\alpha_{i} & s\theta_{i}s\alpha_{i} & a_{i}c\theta_{i} \\
s\theta_{i} & c\theta_{i}c\alpha_{i} & -c\theta_{i}s\alpha_{i} & a_{i}s\theta_{i} \\
0 & s\alpha_{i} & c\alpha_{i} & d_{i} \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (3.10)
$$
In order to obtain a transform that relates the base coordinate frame to any other frame in
the chain, the multiplication shown in equation 3.11 is applied (adapted from Lee 1983).
$$
{}^{0}T_{i} = {}^{0}A_{1}\,{}^{1}A_{2}\cdots{}^{i-1}A_{i} = \prod_{j=1}^{i}{}^{j-1}A_{j} \qquad (3.11)
$$
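Equations 3.10 and 3.11 translate directly into code. The following sketch (illustrative Python with made-up link lengths, not the final arm's DH parameters) builds the single-link transform and chains it along the arm:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Single-link transform of equation 3.10 (standard DH convention)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def forward_kinematics(dh_rows):
    """Chain adjacent transforms as in equation 3.11 to obtain 0Tn."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in dh_rows:
        A = dh_transform(theta, d, a, alpha)
        T = [[sum(T[i][k] * A[k][j] for k in range(4)) for j in range(4)]
             for i in range(4)]
    return T

# Illustrative two-link planar arm, joints at +90 and -90 degrees:
T = forward_kinematics([(math.pi / 2, 0.0, 0.3, 0.0),
                        (-math.pi / 2, 0.0, 0.2, 0.0)])
```

For this configuration the end-effector sits at x = 0.2, y = 0.3, which can be confirmed by inspection of the two links.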
3.2 Inverse Kinematics
Once the relationships between coordinates of all links have been established, the inverse
kinematics problem needs to be addressed. Inverse kinematics involves the calculation of
the joint variables (θ for revolute joints) which correspond to a given end-effector
position and orientation (Siciliano 1999).
There are a number of problems associated with finding the joint angles to produce a
valid shot. The main problem is that it is not always possible for the manipulator to
produce the desired shot. The arm design chosen has six degrees of freedom, three for
position and three for orientation, so the problem is greatly simplified, as it can achieve
any position and orientation within its workspace. The problem then becomes restricting
movements of the end-effector to within its dexterous workspace. The dexterous
workspace is the area in which the arm can utilise its six degrees of freedom.
For any given end-effector position there can exist multiple joint solutions. The number of variations depends on the number of links. For a six degree of freedom arm acting entirely within its dexterous workspace and away from any singularities, up to sixteen different solutions exist. The problem associated with multiple solutions is that a method of selecting a specific one must exist. Some of the solutions can be eliminated immediately if they produce a collision between the various links of the arm, but a heuristic needs to be applied to the remaining ones.
3.2.1 Inverse Kinematic Methods
Three main methods are proposed to obtain the inverse kinematics solution, closed form,
geometric and numeric solutions (Lee 1982). Closed form (algebraic form) and
geometric inverse kinematics are significantly more efficient than numeric solutions.
Geometric form inverse kinematics solutions become complicated for more than three
degrees of freedom. Therefore, closed form inverse kinematics is used in the final
design.
The algebraic solution involves solving the following equation, where ${}^{0}T_{n}$ describes the position and orientation of the end-effector relative to the base frame.
$$
{}^{0}T_{n} = \begin{bmatrix}
n_{x} & s_{x} & a_{x} & p_{x} \\
n_{y} & s_{y} & a_{y} & p_{y} \\
n_{z} & s_{z} & a_{z} & p_{z} \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (3.12)
$$
This method often does not yield a solution. Closed form solutions only exist for simple kinematic configurations, or structures containing a number of zero link lengths and twist angles ($\alpha$) of 0 or $\pm\pi/2$. For a six degree of freedom arm, a solution is guaranteed if three consecutive coordinate axes are coincident, through the application of Piper's solution (Craig 2005). It is for this reason that there is a limited number of industrial robot configurations: they are all designed to have a closed form solution.
For the current arm design, the algebraic inverse kinematics solution involves solving the set of equations (3.13) to (3.23), generated from the coordinate transform from the end effector to the base link. Each equation expands one element of the end-effector transform ($n_{x}$, $n_{y}$, $n_{z}$, $s_{x}$, $s_{y}$, $s_{z}$, $a_{x}$, $a_{y}$, $a_{z}$, $p_{x}$, $p_{y}$, $p_{z}$) into lengthy sums and products of the joint sines and cosines ($s_{i} = \sin\theta_{i}$, $c_{i} = \cos\theta_{i}$) together with the link parameters $a_{i}$ and $d_{i}$.
This solution would have to be completed for $\theta_{1} \rightarrow \theta_{6}$ in order for the robot to make real-time trajectory changes. For the purposes of this project, the Matlab Robotics Toolbox written by Peter I. Corke has been used to solve the inverse kinematics problem. The toolbox uses a numeric solution and, if multiple solutions are present, it selects the one that requires the minimum amount of movement from the previous configuration. The inverse kinematic results obtained from the toolbox have been stored in a lookup table in order to decrease computation time for the real-time control application, described later. In later stages of development the above equations can be solved to give an online kinematic solution. Alternatively, the final arm design can be approximated by an arm containing three coincident coordinate systems at the wrist; this approximation would allow the application of Piper's solution.
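The idea behind a numeric inverse kinematics solution can be illustrated on a planar two-link arm (a hedged sketch in Python with made-up link lengths; the toolbox's own solver is more general): the forward kinematics are linearised with the Jacobian and the joint estimates updated until the end-effector error vanishes.

```python
import math

def fk_2r(t1, t2, a1=0.3, a2=0.25):
    """End-effector position of a planar two-link arm (made-up lengths)."""
    return (a1 * math.cos(t1) + a2 * math.cos(t1 + t2),
            a1 * math.sin(t1) + a2 * math.sin(t1 + t2))

def ik_2r_numeric(x, y, t1=0.0, t2=1.2, a1=0.3, a2=0.25):
    """Newton iteration on the forward kinematics: linearise with the
    2x2 Jacobian, solve for the joint corrections, and repeat."""
    for _ in range(50):
        px, py = fk_2r(t1, t2, a1, a2)
        ex, ey = x - px, y - py
        # analytic Jacobian of the planar arm
        j11 = -a1 * math.sin(t1) - a2 * math.sin(t1 + t2)
        j12 = -a2 * math.sin(t1 + t2)
        j21 = a1 * math.cos(t1) + a2 * math.cos(t1 + t2)
        j22 = a2 * math.cos(t1 + t2)
        det = j11 * j22 - j12 * j21
        t1 += (j22 * ex - j12 * ey) / det
        t2 += (-j21 * ex + j11 * ey) / det
    return t1, t2

t1, t2 = ik_2r_numeric(0.35, 0.2)  # a reachable target for these lengths
```

Which of the elbow-up/elbow-down solutions the iteration converges to depends on the initial guess, which is exactly why a minimum-movement heuristic is needed when multiple solutions exist.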
3.3 Jacobians
In order to play a specific shot, the position of each link of the arm needs to be known so
that the end-effector has the correct position and orientation. However, in order to play a
shot with specified velocity, each joint velocity at the specified joint positions needs to be
found. This is accomplished using Jacobians, which are used to relate joint velocities to
the linear and angular velocities of the end-effector (Siciliano 1999). The relationships
between the joint velocities $\dot{\theta}$, and the linear and angular velocities, $\dot{p}$ and $\omega$ respectively, are shown below:
$$
\dot{p} = J_{P}(\theta)\,\dot{\theta} \qquad (3.24)
$$
$$
\omega = J_{\omega}(\theta)\,\dot{\theta} \qquad (3.25)
$$
where
$$
\theta = \begin{bmatrix} \theta_{1} \\ \vdots \\ \theta_{n} \end{bmatrix}
$$
Note: these relationships are only valid at a particular instant. The moment $\theta$ changes, the Jacobian has to be recalculated.
Equations 3.24 and 3.25 are combined to form $J$, which relates both linear and angular velocity:
$$
\upsilon = J(\theta)\,\dot{\theta} \qquad (3.26)
$$
where $J(\theta)$ is of the form:
$$
J = \begin{bmatrix} J_{P1} & \cdots & J_{Pn} \\ J_{\omega 1} & \cdots & J_{\omega n} \end{bmatrix} \qquad (3.27)
$$
$J_{Pi}$ represents the Jacobian component that transforms the velocity of joint $i$ into a component of the linear velocity of the end effector at joint configuration $\theta$. Similarly, $J_{\omega i}$ is the Jacobian component that transforms the velocity of joint $i$ into a component of the angular velocity of the end effector at joint configuration $\theta$. The number of columns of the Jacobian equals the number of degrees of freedom (links) of the manipulator. There are always three rows for linear velocity, in the x, y and z directions, and three for angular velocity. Hence, for a six degree of freedom manipulator, the Jacobian is a 6 by 6 square matrix.
The Jacobian can be calculated from the following equation. As the design is purely revolute, only the lower (revolute) case of the equation is required.
$$
\begin{bmatrix} J_{Pi} \\ J_{\omega i} \end{bmatrix} =
\begin{cases}
\begin{bmatrix} z_{i-1} \\ 0 \end{bmatrix} & \text{for prismatic joints} \\[2ex]
\begin{bmatrix} z_{i-1} \times (p - p_{i-1}) \\ z_{i-1} \end{bmatrix} & \text{for revolute joints}
\end{cases} \qquad (3.28)
$$
$z_{i-1}$ is taken from the z column of the rotation component of the transform; ${}^{0}T_{i}$ is calculated as described in section 3.1.7, and $p$ is taken from the position component of ${}^{0}T_{i}$.
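Equation 3.28 assembles the Jacobian column by column. The sketch below (illustrative Python; a planar two-link example with made-up lengths rather than the six-link design) builds the 6 x n geometric Jacobian for revolute joints:

```python
import math

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def geometric_jacobian(z_list, p_list, p_end):
    """Build the 6xn Jacobian of equation 3.28 for an all-revolute arm:
    column i is [z_{i-1} x (p - p_{i-1}); z_{i-1}]."""
    cols = []
    for z, p in zip(z_list, p_list):
        lin = cross(z, [p_end[k] - p[k] for k in range(3)])
        cols.append(lin + list(z))
    # transpose the list of 6-element columns into a 6xn matrix
    return [[col[r] for col in cols] for r in range(6)]

# Planar two-link example (made-up lengths): both joint axes along z.
a1, a2, t1, t2 = 0.3, 0.2, math.pi / 4, math.pi / 6
p0 = [0.0, 0.0, 0.0]
p1 = [a1 * math.cos(t1), a1 * math.sin(t1), 0.0]
p_end = [p1[0] + a2 * math.cos(t1 + t2), p1[1] + a2 * math.sin(t1 + t2), 0.0]
J = geometric_jacobian([[0, 0, 1], [0, 0, 1]], [p0, p1], p_end)
```

For this planar case the first column reduces to $[-p_y,\ p_x,\ 0]$, which matches the analytic Jacobian of a two-link arm.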
3.3.1 Jacobian Input Parameters
For the purpose of calculating the Jacobian matrix for the final arm design, the following parameters are required. Note that if the DH parameters of the arm are modified, the following parameters need to be regenerated for the new design. Generating the Jacobian requires two primary sets of inputs, $z_{i}$ and $p_{i}$, as seen in equation 3.28.
The $z_{i}$ components are as follows:
$$
z_{0} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \qquad
z_{1} = z_{2} = \begin{bmatrix} s_{1} \\ -c_{1} \\ 0 \end{bmatrix}
$$
with $z_{3}$ through $z_{6}$ expanding into progressively longer combinations of the joint sines and cosines ($s_{i} = \sin\theta_{i}$, $c_{i} = \cos\theta_{i}$).
The $p_{i}$ components are as follows:
$$
p_{0} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \qquad
p_{1} = \begin{bmatrix} a_{1}c_{1} \\ a_{1}s_{1} \\ d_{1} \end{bmatrix}
$$
with $p_{2}$ through $p_{6}$ likewise built up from the link lengths $a_{i}$, the offsets $d_{i}$ and the joint sines and cosines.
When these inputs are applied to equation 3.28, the Jacobian is obtained. The result of the calculation is confirmed using the Robotics Toolbox (Corke 2001).
Once the Jacobian is obtained, the inverse Jacobian can be calculated. This provides the joint-space velocities corresponding to a given velocity of the end-effector:
$$
\dot{\theta} = J^{-1}(\theta)\,\upsilon \qquad (3.29)
$$
As previously stated, the above equation enables a method for the robot to play a table
tennis shot, not only at a given position but at a specified linear and rotational velocity.
3.4 Singularities
The Jacobian can also be used to indicate possible configurations at which singularities
are present. Singularities are manipulator configurations in which one or more degrees of
freedom are made redundant. This reduces the ability of the robot to move in 3D space
close to a singularity, even though the area could be well within its workspace. In order
to calculate the joint velocities necessary to produce a given Cartesian velocity the
Jacobian matrix is inversed. If the inverse Jacobian is applied close to a singularity, the
joint velocities approach infinity (Craig 2005). For this reason, it is essential that the
robot be designed so that it operates away from singularities, both boundary and internal.
This is particularly important if the inverse Jacobian is to be calculated in a real time
system.
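The blow-up near a singularity is easy to demonstrate on a planar two-link arm (an illustrative Python sketch with made-up lengths): the determinant of its Jacobian is proportional to sin(theta2), so as the arm approaches full extension the joint rates demanded for a fixed Cartesian velocity grow without bound.

```python
import math

def jacobian_2r(t1, t2, a1=0.3, a2=0.2):
    """2x2 position Jacobian of a planar two-link arm (illustrative lengths)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-a1 * s1 - a2 * s12, -a2 * s12],
            [ a1 * c1 + a2 * c12,  a2 * c12]]

def joint_rates(t1, t2, v):
    """Invert the Jacobian to get joint velocities for a Cartesian velocity v."""
    (j11, j12), (j21, j22) = jacobian_2r(t1, t2)
    det = j11 * j22 - j12 * j21  # equals a1*a2*sin(t2) for this arm
    return [(j22 * v[0] - j12 * v[1]) / det,
            (-j21 * v[0] + j11 * v[1]) / det]

# Away from singularity the rates are moderate; near full extension
# (t2 -> 0, a boundary singularity) the same Cartesian velocity demands
# far larger joint rates.
ok = joint_rates(0.5, 1.2, [0.1, 0.0])
near_singular = joint_rates(0.5, 0.01, [0.1, 0.0])
```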
3.4.1 Boundary singularities
These occur when the manipulator is operating close to the boundaries of its workspace
(Siciliano 1999). This can occur if the arm is fully stretched out or contracted. For the
purpose of this design, the manipulator will not be operating close to the outstretched
boundary. However, the retracted boundary is a potential difficulty. The shots that pose a
problem are those near the centre, at the top of the hitting plane. It is unlikely that the
arm will be able to perform high velocity shots at this position. This is not considered to
be a problem, as the design of the table and the rules of ROBAT make it impossible for a
high velocity shot in this area to be valid.
3.4.2 Internal singularities
Internal singularities are more of a concern than boundary singularities, as they are present inside the reachable workspace of the manipulator. Internal singularities mostly arise from two or more joint axes aligning (Siciliano 1999). The manipulator has been designed so that none of these occur in the current workspace; the first joint axis has been positioned perpendicular to the hitting plane for this reason.
3.5 Trajectory Generation
The trajectory of a particular shot gives the position of each of the joints of the
manipulator at each point in time during the shot. Each shot the robot makes is from a
ready position, with zero initial velocity and acceleration. The ready position shown in
Figure 4-2 is found to be the configuration of the arm that allows all the desired shots to
be played with minimal required joint torques on the arm. The end conditions of the shot
are determined based on the ping-pong ball trajectory given by the vision system. A
given shot is first specified by a T matrix, which gives position and orientation of the
shot, as well as a velocity vector. The end conditions for each joint trajectory are first
calculated using the inverse kinematics to obtain joint positions, then applying the results
to the inverse Jacobian in order to obtain joint velocities.
There are two different approaches that can be taken to specify the motion of the robot: joint space and operational space. Operational space involves mapping a path of the end effector in 3D space and then using the inverse kinematics and Jacobian to determine each joint variable required to produce that trajectory. This approach is most useful if there are areas of the robot's workspace that the end effector needs to avoid. However, operational space trajectories involve online computation of the inverse kinematics, which will increase the computation time of the control system.
Joint space trajectory generation involves mapping out individual trajectories for each joint. In table tennis, only the end conditions of the shot are important. The actual trajectory of the shot affects variables such as motor torques and workspace restrictions; however, as long as the shot is valid and can be defined for control purposes, any joint space trajectory that realises the end conditions is equivalent. Joint space trajectories are much more efficient to implement online, as the inverse kinematics and Jacobian only need to be evaluated once, at the initial generation of the trajectory.
Trajectory generation is a complicated area of research, and hence for the purpose of arm design and testing a simplified approach has been taken, which will be expanded upon when a working manipulator has been completed. The approach used is to map each joint trajectory with a single quintic polynomial, which allows the end conditions for position, velocity and acceleration to be fully specified. The quintic polynomial is solved for these shot conditions using the following coefficient expressions:
$$
a_{0} = \theta_{i} \qquad (3.30)
$$
$$
a_{1} = \dot{\theta}_{i} \qquad (3.31)
$$
$$
a_{2} = \frac{\ddot{\theta}_{i}}{2} \qquad (3.32)
$$
$$
a_{3} = \frac{20(\theta_{f}-\theta_{i}) - (8\dot{\theta}_{f}+12\dot{\theta}_{i})t_{f} - (3\ddot{\theta}_{i}-\ddot{\theta}_{f})t_{f}^{2}}{2t_{f}^{3}} \qquad (3.33)
$$
$$
a_{4} = \frac{30(\theta_{i}-\theta_{f}) + (14\dot{\theta}_{f}+16\dot{\theta}_{i})t_{f} + (3\ddot{\theta}_{i}-2\ddot{\theta}_{f})t_{f}^{2}}{2t_{f}^{4}} \qquad (3.34)
$$
$$
a_{5} = \frac{12(\theta_{f}-\theta_{i}) - 6(\dot{\theta}_{f}+\dot{\theta}_{i})t_{f} - (\ddot{\theta}_{i}-\ddot{\theta}_{f})t_{f}^{2}}{2t_{f}^{5}} \qquad (3.35)
$$
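Equations 3.30 to 3.35 can be implemented and checked directly (a sketch in Python; the boundary values used below are arbitrary test numbers, not real shot parameters):

```python
def quintic_coeffs(th_i, thd_i, thdd_i, th_f, thd_f, thdd_f, tf):
    """Coefficients a0..a5 of equations 3.30 to 3.35 for one quintic segment."""
    a0 = th_i
    a1 = thd_i
    a2 = thdd_i / 2.0
    a3 = (20.0 * (th_f - th_i) - (8.0 * thd_f + 12.0 * thd_i) * tf
          - (3.0 * thdd_i - thdd_f) * tf ** 2) / (2.0 * tf ** 3)
    a4 = (30.0 * (th_i - th_f) + (14.0 * thd_f + 16.0 * thd_i) * tf
          + (3.0 * thdd_i - 2.0 * thdd_f) * tf ** 2) / (2.0 * tf ** 4)
    a5 = (12.0 * (th_f - th_i) - 6.0 * (thd_f + thd_i) * tf
          - (thdd_i - thdd_f) * tf ** 2) / (2.0 * tf ** 5)
    return a0, a1, a2, a3, a4, a5

def eval_quintic(coeffs, t):
    """Position, velocity and acceleration of the quintic at time t."""
    a0, a1, a2, a3, a4, a5 = coeffs
    pos = a0 + a1 * t + a2 * t**2 + a3 * t**3 + a4 * t**4 + a5 * t**5
    vel = a1 + 2 * a2 * t + 3 * a3 * t**2 + 4 * a4 * t**3 + 5 * a5 * t**4
    acc = 2 * a2 + 6 * a3 * t + 12 * a4 * t**2 + 20 * a5 * t**3
    return pos, vel, acc

# A rest-to-rest segment: 1.2 rad in 0.5 s from and to zero velocity/acceleration.
coeffs = quintic_coeffs(0.0, 0.0, 0.0, 1.2, 0.0, 0.0, 0.5)
```

Evaluating the polynomial at t = 0 and t = t_f recovers the specified boundary conditions, which is a useful sanity check before any trajectory is sent to the arm.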
The major problem with this method is that, when non-zero velocity and acceleration end conditions are applied, the position of the joint can move outside its position boundaries. This is a problem because the joints have limits on how far they can move, determined by the physical arm design. Hence a single polynomial cannot be relied upon to generate trajectories in a real-time situation, as joints could be driven into their limits.
For design and testing purposes, the velocity and acceleration end conditions have been set to zero to avoid the above situation. Shots with non-zero end velocities first need to be simulated offline to ensure the robot is capable of realising the shot under the current scheme.
Trajectory generation is a very interesting component of playing table tennis; however, in order to explore the field more extensively, a completed arm design as well as a robust control system is required. These two objectives are considered to have greater immediate value and hence are the focus of this project.
3.6 Dynamics
The manipulator dynamics are a series of differential equations that describe the dynamic behaviour of the robot (Turney 1980). The dynamics relate the positions of the joints and their derivatives to the forces and torques acting on those joints. The dynamics of the system need to be evaluated for the purposes of motor selection and control. Given a set of desired trajectories and the robot kinematics, the dynamic equations can be used to determine the torques required to achieve the requested motions. From these calculations, motors can be selected that satisfy the torque and angular velocity requirements.
In order to calculate robot dynamics, the physical parameters of the entire manipulator
need to be fully defined. This poses a problem for motor selection. Motor selection is
dependent upon the torque and angular velocity requirements of the motors. The torque
and angular velocity requirements of the motors depend on the physical properties of the
manipulator, which change depending on the selected motors. Thus an iterative approach
is necessary.
3.6.1 Methods for Solving Robot Dynamics
There are two main groups of methods used to evaluate the dynamics of serial manipulators. Many variations of each are available, but the principal two are the Lagrangian method and the Newton-Euler method. In early robotic control, the computation of the robot dynamics equations was a bottleneck in real-time control schemes, mainly because of the rather inefficient formulation of the Lagrangian dynamics equations. The Lagrangian method solves the dynamics problem with an energy-based formulation over the entire manipulator (Lee 1983). Alternatively, the Newton-Euler formulation is based on a balance of the forces acting on a generic link of the manipulator. This approach allows a recursive solution, which propagates the velocities and accelerations from the base to the end-effector, followed by a backward propagation of the forces and moments (Siciliano 1999). The following table, taken from Hollerbach (1980), compares the efficiency of various methods:
Table 3-1: Comparison of dynamics formulations (operation counts for an n-link manipulator)
Formulation — Multiplications — Additions
Uiker/Kahn — quartic polynomial in n — quartic polynomial in n
Waters — quadratic in n — quadratic in n
Hollerbach (4x4) — 830n − 592 — 675n − 464
Hollerbach (3x3) — 412n − 277 — 320n − 201
Newton-Euler — 150n − 48 — 131n − 48
Horn, Raibert — low-order polynomial in n — low-order polynomial in n
Table 3-2: Formulations applied to a six degree of freedom arm
Multiplications Additions
Uiker/Kahn 66271 51548
Waters 7051 5652
Hollerbach (4x4) 4388 3586
Hollerbach (3x3) 2195 1719
Newton-Euler 852 738
Horn, Raibert 468 264
The top two methods in the above table are variations of the Lagrangian method and involve considerably more operations than the bottom two, which are both Newton-Euler style solutions (Hollerbach 1980). The Horn, Raibert method is faster than the Newton-Euler method; however, the standard Newton-Euler approach is much simpler to implement. Therefore Newton-Euler dynamics are implemented in this project. As shown later in Section 5, the Newton-Euler equations, although slower than those of Horn and Raibert, can successfully be implemented in real time using current computers.
3.6.2 Newton-Euler implementation
The Newton-Euler method for solving robot dynamics requires certain parameters to be defined (Siciliano 1999). These include the physical parameters of each link, which are defined with respect to the coordinate system of the attached link; the kinematic parameters of the relative motions between the links; and the forces and moments acting on the link joints. To further improve the accuracy of the model, the properties of the actuators, such as rotor inertia and friction, are taken into account.
Figure 3-4: Parameters for Newton-Euler formulation (Siciliano 1999)
3.6.2.1 Physical Parameters
$m_{i}$: mass of augmented link $i$
$I_{i}$: inertia tensor of augmented link $i$
$I_{m_{i}}$: moment of inertia of rotor $i$
$r_{i-1,C_{i}}$: vector from the origin of frame $(i-1)$ to the centre of mass $C_{i}$
$r_{i,C_{i}}$: vector from the origin of frame $i$ to the centre of mass $C_{i}$
$r_{i-1,i}$: vector from the origin of frame $(i-1)$ to the origin of frame $i$
3.6.2.2 Velocity and Acceleration Parameters
$\dot{p}_{C_{i}}$: linear velocity of the centre of mass $C_{i}$
$\dot{p}_{i}$: linear velocity of the origin of frame $i$
$\omega_{i}$: angular velocity of link $i$
$\omega_{m_{i}}$: angular velocity of rotor $i$
$\ddot{p}_{C_{i}}$: linear acceleration of the centre of mass $C_{i}$
$\ddot{p}_{i}$: linear acceleration of the origin of frame $i$
$\dot{\omega}_{i}$: angular acceleration of link $i$
$\dot{\omega}_{m_{i}}$: angular acceleration of rotor $i$
$g_{0}$: gravity acceleration
3.6.2.3 Forces and moments
$f_{i}$: force exerted by link $i-1$ on link $i$
$-f_{i+1}$: force exerted by link $i+1$ on link $i$
$\mu_{i}$: moment exerted by link $i-1$ on link $i$, with respect to the origin of frame $i-1$
3.6.2.4 Recursive Algorithm for joint torque calculation
The equations for the forward propagation of the link velocities and accelerations are:
$$
\omega_{i}^{i} = R_{i}^{T}\left(\omega_{i-1}^{i-1} + \dot{q}_{i}\,z_{0}\right) \qquad (3.36)
$$
$$
\dot{\omega}_{i}^{i} = R_{i}^{T}\left(\dot{\omega}_{i-1}^{i-1} + \ddot{q}_{i}\,z_{0} + \dot{q}_{i}\,\omega_{i-1}^{i-1}\times z_{0}\right) \qquad (3.37)
$$
$$
\ddot{p}_{i}^{i} = R_{i}^{T}\,\ddot{p}_{i-1}^{i-1} + \dot{\omega}_{i}^{i}\times r_{i-1,i}^{i} + \omega_{i}^{i}\times\left(\omega_{i}^{i}\times r_{i-1,i}^{i}\right) \qquad (3.38)
$$
$$
\ddot{p}_{C_{i}}^{i} = \ddot{p}_{i}^{i} + \dot{\omega}_{i}^{i}\times r_{i,C_{i}}^{i} + \omega_{i}^{i}\times\left(\omega_{i}^{i}\times r_{i,C_{i}}^{i}\right) \qquad (3.39)
$$
$$
\dot{\omega}_{m_{i}}^{i-1} = \dot{\omega}_{i-1}^{i-1} + k_{ri}\,\ddot{q}_{i}\,z_{m_{i}}^{i-1} + k_{ri}\,\dot{q}_{i}\,\omega_{i-1}^{i-1}\times z_{m_{i}}^{i-1} \qquad (3.40)
$$
where $R_{i}$ is the rotation matrix from frame $i-1$ to frame $i$, $z_{0} = [0\;\;0\;\;1]^{T}$, and $k_{ri}$ is the gear reduction ratio of motor $i$.
The equations relating to the backward propagation of moments and torques:
f_i^i = R_{i+1}^i f_{i+1}^{i+1} + m_i \ddot{p}_{C_i}^i   (3.41)

\mu_i^i = -f_i^i \times (r_{i-1,i}^i + r_{i,C_i}^i) + R_{i+1}^i \mu_{i+1}^{i+1} + R_{i+1}^i f_{i+1}^{i+1} \times r_{i,C_i}^i + \bar{I}_i^i \dot{\omega}_i^i + \omega_i^i \times (\bar{I}_i^i \omega_i^i) + k_{r,i+1} \ddot{q}_{i+1} I_{m_{i+1}} z_{m_{i+1}}^i + k_{r,i+1} \dot{q}_{i+1} I_{m_{i+1}} \omega_i^i \times z_{m_{i+1}}^i   (3.42)

\tau_i = (\mu_i^i)^T (R_i^{i-1})^T z_0 + k_{r,i} I_{m_i} (\dot{\omega}_{m_i}^{i-1})^T z_{m_i}^{i-1} + F_{v,i} \dot{q}_i + F_{s,i} \mathrm{sgn}(\dot{q}_i)   (3.43)
3.6.2.5 Physical Constants
The Newton-Euler equations contain a number of constant parameters which describe the
physical arm. The mass, centre of mass and inertia parameters were determined from a
CAD model of the manipulator constructed using Autodesk Inventor, as described in
Section 7. Parameters such as rotor inertia and friction coefficients are taken from the
specifications of the chosen motors. However, friction will also be present in the bearings
and joints, and there is no reasonable method of predicting this; these values will be
found experimentally when the arm is functional. Listed below are the physical
properties of the final arm design, which along with the DH parameters contain all the
information required to calculate the robot dynamics at a given joint state q_i, \dot{q}_i, \ddot{q}_i.
Note that any change to the arm design requires these parameters to be re-evaluated.
Inertial tensors: These include inertial parameters of the link itself, and the motor
which drives the subsequent links. The inertia tensor for link i is taken at the centre of
mass of link i with respect to the coordinate system of frame i.
I_i = [  I_xx   -I_xy   -I_xz
        -I_xy    I_yy   -I_yz
        -I_xz   -I_yz    I_zz  ]
Inertial tensors for the final arm design (kg·m²):
I_1 = [  3.454e-2       0        -1.358e-3
            0        8.93e-3     -2.355e-3
        -1.358e-3   -2.355e-3     3.47e-2  ]

I_2 = [  6.283e-3   -2.303e-3    -2.373e-3
        -2.303e-3    2.072e-2    -1.373e-3
        -2.373e-3   -1.373e-3     1.75e-2  ]

I_3 = diag( 1.901e-6, 1.415e-3, 2.194e-3 )

I_4 = diag( 1.292e-2, 746e-6, 1.226e-2 )

I_5 = diag( 1.463e-3, 653.2e-6, 912.9e-6 )

I_6 = diag( 2.577e-3, 88.44e-6, 2.505e-3 )
Centre of mass vector: This specifies the centre of mass for link i with respect to the
coordinate system of frame i.
r_{i,C_i} = ( r_x, r_y, r_z )^T
Centre of gravity vectors for final arm design:
r_{1,C_1} = ( 106.1, -39.2, -29.0 )^T
r_{2,C_2} = ( 234.5, 19.8, -33.7 )^T
r_{3,C_3} = ( -30.9, 0, -8.65 )^T
r_{4,C_4} = ( 0, 224, 0 )^T
r_{5,C_5} = ( 0, 4.3, 25.2 )^T
r_{6,C_6} = ( 0, -133.8, 0 )^T
Mass of each Link
m1 = 1.584 kg
m2 = 1.613 kg
m3 = 0.802 kg
m4 = 0.557 kg
m5 = 0.497 kg
m6 = 0.243 kg
Motor axes: All axes of rotation for the six motors coincide with the z-axis of their
respective coordinate frames, and therefore:

z_{m_i}^{i-1} = ( 0, 0, 1 )^T for all links 1-6
3.6.2.6 Dynamic Boundary conditions
Gravity: This takes into account the effect of gravity on all links, with an acceleration of
9.81m/s2 in the negative x direction (which is vertically upwards, due to the initial choice
of coordinate system). This is equivalent to applying the acceleration to each link
individually vertically downwards. This gives the initial condition of:
\ddot{p}_0 = ( -g, 0, 0 )^T
Base Velocities: Since frame zero is stationary relative to the world, the angular
velocities and accelerations are set to zero for that frame.
\omega_0 = ( 0, 0, 0 )^T

\dot{\omega}_0 = ( 0, 0, 0 )^T
External forces: An external force can be taken into account as acting on the end-
effector, which in this case is the bat. This is done by applying a force vector to the
end effector:
Page 35 of 274
f_6^6 = ( f_x, f_y, f_z )^T
For the purposes of motor selection, it is assumed that this condition is zero, as the force
on the bat due to the ball impact is negligible in comparison to the inertial forces acting on
the manipulator.
Friction: Friction is included in the final torque equation (3.43). F_{v,i}\dot{q}_i denotes the viscous
friction and F_{s,i}\mathrm{sgn}(\dot{q}_i) represents the Coulomb friction component. These two parameters
represent the main source of error in the dynamics model, due to the large amount of
friction in the high-ratio gearboxes selected for the motors.
Motor inertia: The final set of initial conditions relate to the motors and the armature
inertia. These values are taken from the data sheets of the selected motors. The units of
the following values are kg·m².
Link 1 motor: 6.28x10-6
Link 2 motor: 6.03 x10-6
Link 3 motor: 6.03 x10-6
Link 4 motor: 0.403 x10-6
Link 5 motor: 0.403 x10-6
Link 6 motor: servo
4 Simulation and Testing
In the initial stages of testing and design, only the kinematic properties of the arm are
considered. The aim of this is to find a manipulator design that is compact and able to
achieve the shots necessary to play table tennis. The requirements are that it is able to reach
every position within a hitting plane, which is represented by the brown square in Figure
4-2. Additionally, the arm is required to achieve the position at all orientations that could
be expected of a practical shot.
In order to investigate the capabilities of numerous different arm designs, a program has
been developed that allows the user to simply enter the DH parameters of the manipulator
defined in Section 3.1.5. The program calculates the inverse kinematics and Jacobian,
based on the end-effector positions taken from around the boundary of the hitting plane.
From these conditions, a basic trajectory is plotted. Early testing relied heavily upon the
robotics toolbox (Corke 2001) for visualization of the shots, as it is able to generate
representations of any arm design. Some modifications have been made to the toolbox
that allow the shot to be viewed from different angles. The hitting plane has also been
added to the plot to confirm that the correct shots were being played.
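The core transformation behind such a testing program can be sketched as follows. This is an illustrative implementation of the standard DH link transform and forward-kinematic chain only, not the project's actual program, and the function names are hypothetical.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Standard Denavit-Hartenberg link transform A_i for the parameters
// (a, alpha, d, theta) -- the same convention the testing program accepts.
Mat4 dhTransform(double a, double alpha, double d, double theta) {
    double ct = std::cos(theta), st = std::sin(theta);
    double ca = std::cos(alpha), sa = std::sin(alpha);
    return {{{ct, -st * ca,  st * sa, a * ct},
             {st,  ct * ca, -ct * sa, a * st},
             {0.0,      sa,       ca,      d},
             {0.0,     0.0,      0.0,   1.0}}};
}

Mat4 matMul(const Mat4& A, const Mat4& B) {
    Mat4 C{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k) C[i][j] += A[i][k] * B[k][j];
    return C;
}

// forward kinematics: chain the link transforms T = A_1 A_2 ... A_n
template <typename It>
Mat4 forwardKinematics(It first, It last) {
    Mat4 T{};
    for (int i = 0; i < 4; ++i) T[i][i] = 1.0;  // start from identity
    for (auto it = first; it != last; ++it) T = matMul(T, *it);
    return T;
}
```

Given the forward map, the inverse kinematics and Jacobian used by the testing program follow either analytically for a specific geometry or numerically from this chain.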
Figure 4-1: Output from modified toolbox
4.1 Virtual Reality Models
After the completion of the kinematic design, a virtual reality model was developed.
Using the VR model, a large cross-section of possible shots has been applied to the
model to ensure that each trajectory is valid and that no internal collisions occur. The
model has been made using the VR-builder in Matlab.
Figure 4-2: VR model (arm shown in ready position)
The brown square in Figure 4-2 represents the hitting plane on which the bat is expected to
make contact with the ball. To give an understanding of the orientation of the individual
coordinate systems during robot motion, the coordinate systems were added to the initial
VR-model using the following colour convention:
Red: x-axis
Green: y-axis
Blue: z-axis (axis of rotation)
After the completion of the physical design, the VR model was upgraded to include the
actual dimensions of the arm. This model was used to further verify that the design was
capable of achieving the range of motions specified.
Figure 4-3: VR model of physical arm (arm shown in ready position)
4.1.1 Graphical User Interface
The following GUI is designed in order to manually move and control the virtual arm or
play pre-recorded shots, without having to directly modify code:
Figure 4-4: GUI driver for VR model
4.1.2 Manual Control
Manual Control refers to the bottom panel in Figure 4-4. The sliders under manual
control allow the user to manipulate each joint of the arm, either by moving the slider or
entering the desired position in the edit bar. The slider and edit bars also track the
position of each joint during a pre-programmed shot. The “reset to ready position”
button resets the arm to its start position and updates all the sliders.
4.1.3 Programmed shots
Programmed shots can be selected using the top panel in Figure 4-4. It is essential that
the arm has the ability to play all shots on the hitting plane. To determine if the arm is
able to achieve these shots, the plane has been split into a grid. The “select horz coord”
and “select vert coord” lists allow the user to choose a coordinate on the hitting plane
they wish to move the arm to. Similarly, “select shot angle” allows the angle of the shot
to be selected. Once the shot parameters are chosen they can be calculated by pressing
the “load shot” button.
The “load shot” button runs an m-file, loadshot.m, and passes the parameters that have
been selected. The m-file calculates the inverse kinematics, Jacobian, shot trajectory and
the manipulator dynamics as described in previous sections, and the results are saved into
a mat-file. The shot can then be animated using the following Simulink model, which was
later applied to the Matlab simulation of the control system described in Section 5.
Figure 4-5: Simulink connection to VR model (a From File block reads loadshot.mat and drives the rotation of each of the six joints in the VR Sink)
4.2 Results of robot simulation
The robot simulation has been implemented for the purpose of arm design and motor
selection. In order to properly select the required motors, the dynamic equations,
described in Section 3.6, are applied to a large set of preplanned trajectories in order to
calculate the maximum expected torques on each link, as well as the velocities at which
they occurred. Shown below are the expected torques acting on each link joint for a
cross-section of shots into the hitting plane. This torque analysis is heavily relied upon by
the motor selection. Arm designs are iterated over until the expected torques come within
range of the actuators that the current design is modelled for.
Figure 4-6: Torque Plots (joint torque in Nm against time in seconds for Links 1 to 6)
As each simulation is run, the maximum joint angles, velocities, accelerations and
torques are recorded.
Table 4-1 shows the highest magnitudes of position, velocity and torque that the final arm
is designed to operate under. Note that the results are taken for a shot time of 0.5 seconds.
Table 4-1: Data set for shot time of 0.5 seconds
JOINT 1 JOINT 2 JOINT 3 JOINT 4 JOINT 5 JOINT 6
Ready Position(rad) 0.0000 -0.8456 1.0473 0.0000 1.7725 0.0000
Max Angle(rad) 1.030 0.198 1.235 1.218 2.550 0.9867
Min Angle(rad) -1.030 -0.845 -0.562 -1.218 0 -0.9867
Motion Range(rad) 2.060 1.043 1.797 2.436 2.550 1.9734
Max Velocity(rad/s) 3.863 3.914 0.7049 4.569 2.917 3.700
Min Velocity(rad/s) -3.863 0.0000 -6.037 -4.569 -5.746 -3.700
Max Torque(Nm) 12.57 4.560 6.565 0.363 0.251 0.00148
Min Torque(Nm) -10.80 -3.863 -0.2773 -0.328 -0.403 -0.00148
5 Control System
The type of control structure that is required for the control of a serial manipulator
depends greatly on its design and application. For the purpose of this project, the aims of
the control system are:
• A fast response: the arm is required to complete a shot in around 0.5 seconds.
• Effective trajectory following: this is essential, as the planned trajectories are
designed to avoid collision and stay within the joint movement limits of the arm.
Additionally the actuators were selected based on the dynamics of the planned
trajectories and any major variance from those trajectories might exceed the
torque limits of the actuators.
• Robustness and stability: This is the most important requirement for the arm,
since large torques occur during its motion, and instability may cause severe
damage to the robot.
The major problem associated with the control of a serial manipulator is that it is a highly
non-linear and coupled system, due to the changing arm configurations and relative
motion between the links. Some control schemes attempt to linearise the control
equations, however due to the large range of motion that the manipulator needs to cover,
this solution provides unacceptable results. At slow speeds, the non-linear dynamics
terms due to the coupling effects of the changing arm configuration are small. For many
industrial applications that do not require high speeds, PID control in joint space is
sufficient to control the arm. Due to the simplicity of PID control, which does not require
any knowledge of the system, it is still the most widely used form of robotic control in
the robotics industry (Craig, 2005).
As the speed of the manipulator increases, the coupled dynamic terms become more
significant, thus making PID control impractical for the application of ping-pong.
To achieve accurate control at high speeds, the non-linearities and coupling need to be
taken into account and effectively cancelled. This type of control is called computed
torque or inverse dynamics control, and it is based on the real-time use of the dynamic
model of the robot and current state (Siciliano 1999). Computed torque control linearises
and decouples the dynamic equations, providing uniform behaviour independent of the
configuration of the arm (Khalil). This method of control is considered the ideal solution
for controlling serial manipulators but has two major drawbacks. The first is that it
requires a complete dynamic model of the arm. Inaccuracies in the model cause inherent
errors in the control system. Having the requirement of a full dynamic model is a
deterrent in many industrial applications, as any changes to the robot require
recalculation of the robot parameters. The second major drawback is that computed
torque control requires the online calculation of the robot dynamics, which until recently
has not been computationally practical. However, the speeds of current computers are
such that this drawback is no longer a major problem (Khalil).
Computed torque control can be implemented in either the joint or operational space. For
this application, joint space control has been used because the trajectories are calculated
in joint space, negating the need for online computation of the Jacobian. The following
diagram, adapted from Siciliano, shows the control scheme for computed torque control:
Figure 5-1: Torque Control Scheme
This control scheme comprises two main components: stabilising linear control and the
non-linear compensation/decoupling. The decoupling component takes into account the
current state of each joint of the robot in the form of velocity and position. Using the
current state of the robot, the dynamic model is used to determine the torques
required to achieve the acceleration set point \ddot{q}_{setpoint}(i). This step effectively linearises
the outer stabilising loop, meaning that a change in \ddot{q}_{setpoint}(i) will affect \dot{q}_{actual}(i) and
q_{actual}(i) independently of the configuration of the rest of the joints. Theoretically, if the
dynamic model of the robot is perfect then computed torque control may be run without
the stabilising outer loop. However, due to factors such as friction and non-ideal actuators,
the outer loop is needed to ensure the desired trajectory is accurately followed.
5.1 Control Simulation
The effectiveness of the control system has been tested through its implementation in
Simulink, as shown in Figure 5-2. The trajectory generation, torque calculations and plant
have been implemented using m-functions.
Figure 5-2: Simulink simulation of control system (trajectory generation, stabilising linear control with gains kp and kv, non-linear compensation and decoupling via the torque calculation, and the robot plant)
5.1.1 Plant
Direct dynamics is used to simulate the actual robot, or plant. Direct dynamics
calculates the acceleration of each link of the robot, given the torques acting on each link
as well as the current joint states.
Figure 5-3: Manipulator Plant (a Direct Dynamics MATLAB Function block converts the saturated joint torques into joint accelerations, which are integrated twice to give the joint velocities and positions)
The direct dynamics algorithm, as described by Siciliano, produces the joint accelerations
via equation 5.1:

\ddot{q} = B(q)^{-1} (\tau - \tau')   (5.1)

where \tau' is found by applying the Newton-Euler equations with the input parameters
being the current joint states (q and \dot{q}) and the acceleration \ddot{q} of each link set to zero.
The matrix B(q) is a square matrix of dimensions equal to the number of links of the
manipulator. Matrix B(q) is constructed via the calculation of each column B(i), which is
found by applying the Newton-Euler equations with the following conditions:

g_0 = 0 (gravity)   (5.2)

\dot{q} = 0   (5.3)

\ddot{q}(i) = 1 and \ddot{q}(j) = 0 for j ≠ i   (5.4)

Hence iterating the dynamic equations for the above parameters leads to the construction
of the B(q) matrix.
Once B(q) is found, equation 5.1 is applied to the input torques, resulting in the
accelerations acting on each link. These accelerations can then be integrated twice in
order to find the associated velocities and positions of each link. The major problem in
using the direct dynamics as a simulator plant is that the algorithm involves n+1 Newton-
Euler calculations, where n is the number of links. Thus it is extremely slow.
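Equations (5.1)-(5.4) translate directly into code. The sketch below is illustrative rather than the project's m-function: it takes any inverse-dynamics routine (such as a Newton-Euler implementation) as a black box, builds B(q) column by column, and solves the resulting linear system by Gaussian elimination.

```cpp
#include <cmath>
#include <functional>
#include <utility>
#include <vector>

using Vec = std::vector<double>;
// inverseDynamics(q, qd, qdd, gravityOn) -> joint torques (e.g. Newton-Euler)
using InverseDynamics =
    std::function<Vec(const Vec&, const Vec&, const Vec&, bool)>;

// Direct dynamics via equation (5.1): qdd = B(q)^{-1} (tau - tau').
// tau' = NE(q, qd, 0) with gravity on; column i of B = NE(q, 0, e_i)
// with gravity off (conditions 5.2-5.4).
Vec directDynamics(const InverseDynamics& ne, const Vec& q, const Vec& qd,
                   const Vec& tau) {
    int n = static_cast<int>(q.size());
    Vec zero(n, 0.0);
    Vec tauPrime = ne(q, qd, zero, true);      // bias torques
    std::vector<Vec> B(n, Vec(n));
    for (int i = 0; i < n; ++i) {              // build B column by column
        Vec e(n, 0.0);
        e[i] = 1.0;                            // qdd(i)=1, others 0  (5.4)
        Vec col = ne(q, zero, e, false);       // g0 = 0, qd = 0 (5.2, 5.3)
        for (int r = 0; r < n; ++r) B[r][i] = col[r];
    }
    Vec rhs(n);
    for (int i = 0; i < n; ++i) rhs[i] = tau[i] - tauPrime[i];
    // Gaussian elimination with partial pivoting on [B | rhs]
    for (int k = 0; k < n; ++k) {
        int p = k;
        for (int r = k + 1; r < n; ++r)
            if (std::fabs(B[r][k]) > std::fabs(B[p][k])) p = r;
        std::swap(B[k], B[p]);
        std::swap(rhs[k], rhs[p]);
        for (int r = k + 1; r < n; ++r) {
            double f = B[r][k] / B[k][k];
            for (int c = k; c < n; ++c) B[r][c] -= f * B[k][c];
            rhs[r] -= f * rhs[k];
        }
    }
    Vec qdd(n);
    for (int k = n - 1; k >= 0; --k) {
        double s = rhs[k];
        for (int c = k + 1; c < n; ++c) s -= B[k][c] * qdd[c];
        qdd[k] = s / B[k][k];
    }
    return qdd;
}
```

Note that each call makes n+1 inverse-dynamics evaluations, which is exactly the cost described above.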
5.1.2 Control Simulation Results
In order to test the robustness and stability of the control system, random errors are
introduced into the physical parameters of the arm used to calculate the direct dynamics.
Additionally the torques given to the plant are saturated at the maximum calculated
values as shown in
Page 52 of 274
Table 4-1. Noise is also introduced to the input torques. The following results are
obtained by running a single trajectory using a plant, with random errors in the physical
parameters of the plant of up to 30% from those used in the control system. Figure 5-4
shows the control system running with the outer loop open, with the error gains set to
zero. The red and blue plots represent the desired and actual joint positions respectively.
Figure 5-4: Simulation Results for Joint Position, Open Loop (plant error 30%); joint position in rad against time for Links 1 to 6
Figure 5-4 demonstrates the effect of having a large degree of inaccuracy in the
physical model of the arm. Hence the outer stabilising loop is required in order for the
system to be stable.
The results shown below are generated using the same plant as described above but with
non-zero gains in the outer loop feedback.
Figure 5-5: Simulation Results for Joint Position, Closed Loop (plant error 30%); joint position in rad against time for Links 1 to 6
As can be seen in Figure 5-5 the torque control method can result in effective trajectory
following even when large errors are introduced into the plant. The system is stable with
the limiting factor being the saturation on the actuators. The model that has been
developed should contain significantly less variation from the actual plant than the 30%
error tested above. Hence the chosen technique of computed torque control is capable of
controlling a serial manipulator for this application.
5.2 Real time implementation of control System
The main problem with computed torque control has traditionally been the computation
time involved. However, this problem has been overcome through implementing the
control system on a separate computer using C++. The control program acts as the driver
program for the other components of the system, such as the vision system and
microcontroller. The program is easily adapted to different serial robot designs of any
degree of freedom. Different robots can be created via a C++ file that contains
information specific to that robot such as DH parameters, link inertias and masses etc.
Below is a flow chart showing an overview of the main control loop of the program.
Figure 5-6: State diagram of control system
(Main loop: check cameras → if a new camera update has arrived, calculate a new trajectory → evaluate desired joint states → receive actual joint states → calculate new set point → evaluate joint torques → convert torques to current → check for user commands)
5.3 Description of Main Control Loop
Note that documentation regarding the main robot class may be found in the Appendices.
Check cameras
At the beginning of every loop the program checks to see if there are any new
updates available from the vision system computer. These updates come in the form of a
coordinate on the hitting plane, as well as an estimated time until the ball arrives at those
coordinates. In order to save on computation time, a lookup table is called to define the
end positions and velocities of each of the links for that particular shot. Ideally, a planned
shot based on the ball trajectory is desired.
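The lookup table described above might be sketched as a map from hitting-plane grid cells to precomputed joint end states. All names and values below are illustrative, not those of the project code.

```cpp
#include <array>
#include <map>
#include <utility>

// Each entry holds the end position and velocity of every joint for the
// shot into one grid cell of the hitting plane, precomputed offline.
struct ShotTarget {
    std::array<double, 6> qEnd;   // joint end positions, rad
    std::array<double, 6> qdEnd;  // joint end velocities, rad/s
};

using ShotTable = std::map<std::pair<int, int>, ShotTarget>;

// returns true and fills 'out' if a precomputed shot exists for the cell
bool lookupShot(const ShotTable& table, int horz, int vert, ShotTarget& out) {
    auto it = table.find({horz, vert});
    if (it == table.end()) return false;
    out = it->second;
    return true;
}
```

A map keyed by grid cell keeps the per-update cost to a single logarithmic lookup, which matters inside a loop running hundreds of times per second.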
Calculate new trajectory
Based on the end positions taken from the lookup table a trajectory is plotted for each
link based on a quintic polynomial, and stored as the coefficients of the polynomial. If a
new update occurs at some point in a shot the current states of each of the links are used
as the starting parameters of the new trajectory. As each new trajectory is planned the
current shot time is reset to zero.
Evaluate desired joint states
The desired position, velocity and acceleration are evaluated by applying the current shot
time to the current set of desired trajectories. The current shot time is evaluated using the
system clock on the control computer.
Read in actual joint states
The current joint states are read in from the microcontroller in the form of encoder ticks
since the last loop. The current position of each joint is evaluated by adding the current
position to the read in number of ticks multiplied by the resolution of the encoder. The
current velocity of each joint is then evaluated by taking the derivative of the position
inputs and running the result through a filter to ensure no large velocity spikes occur. If
there is an error in reading the encoders, the faulty input is ignored and the positions and
velocities from the previous iteration are kept.
Calculate new set points
The new set points are calculated by applying the gains of each link to the difference
between the desired and actual joint velocities and positions. This result is then added to
the desired acceleration to form the new set point.
Evaluates joint torques
The required joint torques are calculated via the Newton-Euler implementation based on
the acceleration set points and the joint states, as described in Section 3.6. This is the most
time consuming part of the control loop, and consequently optimising this section was
made a high priority.
Converts torques to currents
The calculated torque values are then converted to current values, which are then sent via
serial communication to the microcontroller to be applied to the driver circuits. The
conversion to current values is done using the motor torque
constant, which is taken from the data sheet of the motors. It is at this point that the
currents are first limited to be within the operating region of each of the individual
motors. Ideally, this current limiting will contain a timer, as each motor can withstand
various current limits for different periods of time without overheating. Overheating
should not be a major concern, as the shots are played in a small amount of time,
followed by a rest period.
Check for user commands and changes of state.
The program is implemented as a state machine, which is described in the next section,
and at the end of each loop the program determines which state it is in either through the
completion of a current shot, or because of a change of user input.
5.4 Control Program GUI
In order to have some control over what the robot is doing at any particular time, the
main control computer is implemented as a state machine that allows user input via a
graphical user interface.
Figure 5-7: GUI for the control program
When the program is first loaded, the GUI is activated in the same state as the above
figure. When the setup button is activated the program loads the selected robot type and
the associated parameters of the robot. During this process all the non state related
variables required to calculate the kinematics and dynamics are determined in order to
save time during online activity. Once all setup calculations have been completed the
current gain and setup time values are loaded into the edit boxes inside the Edit
Parameters component of the GUI. These values can be edited in order to tune the
controller during testing.
The robot starts off in a hanging position, shown in Figure 5-8, which is outside its usual
range of operation. Once the run button is activated, the arm will rise to the ready
position via the control loop and wait there for further input.
Figure 5-8: Start-up Position
Camera updates are blocked until the user activates the “Ready to Play” button. At this
point, the arm is free to play shots as decided by either the camera input or the manual
input provided by the user via the GUI. Note that manual shots have a default shot time
of 0.5 seconds.
When the “TURNOFF” button is activated, all new inputs to the robot are blocked, both
from the camera and GUI. The robot will complete the current shot it is in, and then
return to the loose hanging position. Once it is in this position, the GUI will display “safe
to turn off” in the “current mode” field. At this point the robot can be turned off, or
started up again by pressing the “Setup” button.
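The states and transitions described in this section can be sketched as a small state machine. The state and event names below paraphrase the GUI behaviour and are not the identifiers used in the project code.

```cpp
enum class Mode { Startup, Setup, ReadyPosition, Playing,
                  ShuttingDown, SafeToTurnOff };
enum class Event { SetupPressed, RunPressed, ReadyToPlayPressed,
                   TurnOffPressed, ShotComplete };

// transition function: returns the mode after handling one event
Mode next(Mode m, Event e) {
    switch (m) {
        case Mode::Startup:        // hanging position, waiting for setup
            return e == Event::SetupPressed ? Mode::Setup : m;
        case Mode::Setup:          // parameters loaded, waiting for run
            return e == Event::RunPressed ? Mode::ReadyPosition : m;
        case Mode::ReadyPosition:  // camera updates still blocked
            if (e == Event::ReadyToPlayPressed) return Mode::Playing;
            if (e == Event::TurnOffPressed) return Mode::ShuttingDown;
            return m;
        case Mode::Playing:        // camera or manual shots allowed
            return e == Event::TurnOffPressed ? Mode::ShuttingDown : m;
        case Mode::ShuttingDown:   // finish current shot, return to hanging
            return e == Event::ShotComplete ? Mode::SafeToTurnOff : m;
        case Mode::SafeToTurnOff:  // "safe to turn off" displayed
            return e == Event::SetupPressed ? Mode::Setup : m;
    }
    return m;
}
```

Keeping the transitions in one pure function makes the blocking behaviour (e.g. ignoring camera input until “Ready to Play”) easy to test in isolation.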
The control program was designed in a modular fashion so that various sections of code
can be tested without the entire system being connected. The program has a number of
simulator modes, such as camera simulation and plant simulation, which give valid
testing data to the control section of the code whilst ignoring the serial connections to the
robot and vision system. Results of the system performance including joint positions,
velocities, torques, and currents are all written to m-files that produce the associated
graphs when run.
The manual control section on the GUI was included for the purpose of debugging and
identifying the characteristics of the current controllers. The slider bars also allow the
user to send torque values to each of the motors. These values range between -255 and
255, corresponding to a current level within the operating range of the motors. This
allows a simple way to request current values from the driver circuits, which can then be
used to map the non-linearities of the output of the current drivers.
5.5 Viability of real-time Implementation
The above control loop has been successfully implemented. While testing the control
system with a robotic simulator (which further decreases the speed of computation), the
controller is able to update the microcontrollers 300 times per second. Note that the
figure of 300 cycles per second was generated for the full six degree of freedom arm. If
the number of links is decreased, the program becomes significantly faster. Thus the
chosen control scheme of computed torque control is capable of running in real time for
the task of playing ping-pong.
5.6 Results of Implemented Control System
The control program has been successfully implemented and has all the functionality that
is required to control a fast-moving robot. The program is also very versatile, as it is
capable of controlling any serial manipulator design, without modification, assuming the
electronics and hardware are properly implemented so that the calculated torques can be
realised. Unfortunately, the non-linear characteristics of the current control drivers
purchased for this project were not taken into account. This meant that the currents
requested by the control system via the microcontrollers are not being sent to the motors
reliably. The non-linear characteristics are especially bad at low levels of current, which
is the region the motors mostly operate in.
Under the above conditions, the method of torque control does not function. Torque
control assumes an accurate physical model of the system, as well as the dynamics
performing to the predicted behaviour. If the robot falls too far from its predicted path
during the trajectory the torque control system cannot compensate. Torque control does
not function like PID, which will increase the current to the motors until the set point is
reached. This is a desired characteristic in the control of a serial manipulator as it is
undesirable for large torques to be exerted by the motors if the current configuration of the
arm is not completely specified, as collisions may occur.
Different attempts have been made to try and compensate for the non-linear current
drivers and the results have confirmed the above discussion. Computed torque control
provides an acceleration set point for each link that drives the robot along a trajectory.
This set point is weighted by errors in position and velocity, which keeps the robot on the
correct trajectory. However if incorrect currents are sent to the motors at the start of the
trajectory, the robot falls too far from its predefined trajectory, the acceleration set point
becomes meaningless, and the control system will be dominated by the error terms. If the
error gains are set high enough, the controller can overcome the poor current driver
characteristics in some cases, but each different shot type requires a specific set of gains.
This is impractical and negates the intention of an intelligent control system that should
operate effectively, independent of the robot configuration.
The control system, including the interfaces with both the microcontroller and vision
system, operates as expected and at sufficient speeds to control a ping-pong robot. In
order to convert this control application into a realised system, a different method needs to
be found to translate calculated torque values into actual torques on the motors. The
system could be refined through further system identification of the physical parameters
of the robot, such as the friction coefficients.
6 Arm Development
The development of a robotic arm which meets the requirements of this project involves a
series of steps. The first is to conduct some initial research into how robotic arms
operate, and which type of arm would be suitable for this project. From this it is possible
to choose the type of arm required and how many degrees of freedom are needed. Once
this has been completed, the link lengths and joint locations are mapped out. This is
optimised to ensure that the arm meets all of its requirements while remaining as small
and thus lightweight as possible. Finally, the arm can then be modelled on a CAD
program and the final design completed.
6.1 Requirements
The robotic arm needs to conform to a set of basic requirements which are listed below:
• Able to reach all points within a square 0.5 m workspace
• Able to hit a table tennis ball at differing angles up to 40°
• Adhere to the rules of ‘ROBAT’
The rules of ROBAT limit the area through which the ping pong ball can travel. The ball
must travel through a 0.5 metre square at the end of the table and this is the reason a 0.5
metre workspace was chosen.
It was decided that a shot angle of 40° would be sufficient to return all possible shots
back through the required hitting area and potentially win the point.
Finally the robot must adhere to the rules of ROBAT in order to ensure that it is legal in
any ROBAT competition.
6.2 Research
Five types of arm design have been researched in order to make an informed decision on
what type of robot to use. These are rectangular, cylindrical, spherical, jointed and
SCARA. These are shown respectively in the figure below.
Figure 6-1: Five different arm designs
By looking at the game of ping pong and taking into account the requirements, it is
obvious that a six degree of freedom arm is needed. This is because three degrees are
needed to position the bat at any point in the hitting plane and then move through this
point. Then the remaining three degrees of freedom are used to angle the bat to ensure
the shot remains on the table.
A six degree of freedom jointed arm configuration was therefore chosen as the best arm
type, as it allows all of the requirements to be met.
6.3 Design Description
The final arm design, after the above investigations, is similar to the PUMA robot that
was used by the ping-pong team (Andersson 1988). The idea of eliminating one or more
of the links in order to reduce the cost of the arm was abandoned, as it greatly reduces the
robot’s ability to achieve the required shots. Hence a six degree of freedom arm was
decided upon. Each link of the robot is connected using revolute joints. This is done to
reduce the complexity in modelling the arm and for construction. Prismatic joints are
more complex to implement, and make the arm significantly heavier.
There are three main differences between the current design and that of the original
PUMA arm. The first is the way the arm is mounted. Andersson’s team mounted the
first link so that it was parallel to the hitting plane. In this project, the first link is
mounted perpendicular to the hitting plane, in order to avoid a possible singularity and
produce a more natural looking hit. The singularity was not a problem for Andersson’s
team as their arm was significantly larger than required. This meant that the arm never
extended to a point where the singularity was present. The second main difference is that
the original PUMA robot was not symmetric. In order to simplify trajectory planning, the
project arm is designed to be symmetric. This makes the motor placement and physical
design significantly different to the PUMA. The third difference is that the last three
coordinate frames (the wrist) of the original PUMA are coincident. This is desirable as it
makes the inverse kinematics simple. However, it is not practical for the mechanical
design of the project arm, due to the motor placement changes caused by the symmetry of
the arm.
The second to last joint of the arm is designed to be the hitting joint. This is done so that
the arm is still able to play shots with only one motor if the trajectory mapping
becomes too complicated. The final link is required to produce a large velocity but with
little force, as the ball has negligible mass. For this reason the final link length will be
longer.
The final joint is used solely to orientate the bat. The second and third joints are the main
positioning motors and will provide the major component of the velocities of the bat if
dynamic shots are implemented. The first link is used to move the rest of the arm into the
plane in which the shot is going to be played.
6.4 Final Design Parameters
6.4.1 DH parameters
Table 6-1: DH Parameters
Link number    αᵢ      aᵢ     dᵢ     θᵢ
1              -π/2    a₁     d₁     θ₁
2              0       a₂     0      θ₂
3              -π/2    a₃     0      θ₃
4              -π/2    0      d₄     θ₄
5              π/2     0      0      θ₅
6              π/2     0      d₆     θ₆
Figure 6-2: Representation of DH parameters
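Each row of the DH table corresponds to a homogeneous transform between successive link frames. This can be sketched as follows (a minimal illustration assuming the classic distal Denavit-Hartenberg convention; the function name and structure are ours, not taken from the project code):

```python
import numpy as np

def dh_transform(alpha, a, d, theta):
    """Homogeneous transform from frame i-1 to frame i for one DH row
    (classic distal convention: Rz(theta) Tz(d) Tx(a) Rx(alpha))."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
```

The full arm transform from the base frame to the bat frame is then the product of the six link transforms.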
6.4.2 Link Lengths
The link lengths of the robot are chosen to minimize the overall size and weight of the
arm. The arm is required to reach any point in the hitting plane at angles between 0 and
40 degrees. Choosing the link lengths is largely a trial and error process: initial lengths
were chosen and simulated, and new lengths were then chosen to overcome any problems
highlighted by the simulation.
The finalised arm lengths satisfy all the conditions, using a hitting plane 20% higher and
wider than required. The arm is designed with this extra reach in order to take into
account dynamic shots and follow-through.
Table 6-2: D-H parameters, including finalized lengths
Link number    αᵢ      aᵢ (m)   dᵢ (m)   θᵢ
1              -π/2    0.1      0.1      θ₁
2              0       0.35     0        θ₂
3              -π/2    0.1      0        θ₃
4              -π/2    0        0.4      θ₄
5              π/2     0        0        θ₅
6              π/2     0        0.3      θ₆
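With the finalised parameters, the reach of the arm can be checked by chaining the link transforms, much as the link-length simulations described above do. The following forward-kinematics sketch uses the Table 6-2 values (our own illustrative code, assuming the classic distal DH convention; names are not from the project software):

```python
import numpy as np

def dh(alpha, a, d, theta):
    """One DH link transform (classic distal convention)."""
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Finalised parameters from Table 6-2 (lengths in metres)
ALPHA = [-np.pi / 2, 0.0, -np.pi / 2, -np.pi / 2, np.pi / 2, np.pi / 2]
A     = [0.1, 0.35, 0.1, 0.0, 0.0, 0.0]
D     = [0.1, 0.0, 0.0, 0.4, 0.0, 0.3]

def bat_position(thetas):
    """Position of the bat frame origin in the base frame for given joint angles."""
    T = np.eye(4)
    for alpha, a, d, th in zip(ALPHA, A, D, thetas):
        T = T @ dh(alpha, a, d, th)
    return T[:3, 3]
```

Sweeping the joint angles through their ranges with a routine like this gives the reach envelope shown in Figure 6-3.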
6.4.3 Reach Capabilities of the manipulator
Figure 6-3: Reach capabilities of arm
7 Mechanical Design
Once the final characteristics of the arm are defined, as in the previous section, a
detailed mechanical design can begin. This again involves a few critical steps. The
first is to design and build a ‘Test Rig’. The main purpose of the test rig is to act as a
platform to test the control of the motors, but it also serves to test some of the
construction ideas which can then be used for the robot arm. The second step is to decide
on what types of links and joints are to be used and create a rough model with these.
Finally, this rough model is cleaned up and details added, resulting in a final robotic arm
design.
7.1 Test Rig
The aim of the test rig is to design and build a simple two degree of freedom arm with
revolute joints. This exercise is to be used as a “practice run” for the completed robot,
and when built will allow the control system to be tested.
7.1.1 Design
The rig is designed in the same way as the arm, by first defining its required features and
then selecting a design to meet these requirements. The design which was chosen is
shown below (Figure 7-1).
Figure 7-1: Test rig (Front on)
Motors for the test rig were sourced and purchased from Jaycar Electronics. These were
chosen due to low price and versatility. Both have a reasonable amount of torque and are
quite lightweight.
Bearings and couplings were sourced and purchased from Gardner Bearings. The
bearings were chosen so that they could be ‘recycled’ and used in the arm. The couplings
were chosen because of their low price and availability.
The design of the rig (Figure 7-2) is based around the design of the first few links of the
arm. The construction is mainly aluminium, with the shafts made from mild steel for
added strength. Structural analysis is used to make sure the rig would not fail. Final
drawings (See Appendices) for the test rig were handed to the workshop on May 10th and
construction was complete by June 5th.
Figure 7-2: Test rig (Isometric)
7.1.2 Results
The construction of the test rig allows information to be gathered about both motor
control and link construction. It has been discovered that link construction was fairly
simple to implement and that methods used in the design of the test rig worked well and
could also be used on the robot arm. During the testing of motor control it was found that
the motors could be controlled well; however, the coupling broke and cut the testing time
short. This was viewed as a benefit, as it demonstrated that the original couplings
considered for the arm would not suffice and new ones could be sourced.
7.2 Arm Design
After the basic design of the arm is complete, including link lengths and simulations of its
operation, the construction begins. The first step in this process is to design the arm in a
CAD program. The program chosen for this was Autodesk Inventor. Inventor allows the
arm to be designed and visualised at the same time. It also allows the arm to be checked
for possible collisions and link interference.
Because each link depends upon the previous link, the design of the arm needs to begin at
the end and finish at the base. Link 6 is therefore the first to be designed, followed by
link 5, and so on. This means that the design process is fairly involved as each link has to
be redesigned several times to ensure it is not too heavy for the other links.
7.2.1 Link 6
Link 6 is a simple design. The bat is constructed from 4 mm PVC and is the ROBAT
regulation size of 125 mm in diameter. Most of the length of link 6 is made up by an
aluminium rod. The rod is fitted into two bearings which allow it to rotate freely. The
rod also has a ‘+’ cut-out on the motor end which allows it to attach snugly to the servo
motor.
Figure 7-3: Link 6
7.2.2 Link 5
Link 5 is shown below, with the position of the servo motor in blue. Also shown is the
block in which the bearings from link 6 fit.
Link 5 contains the hitting motor, shown in green, and it is possible to see how this motor
will actuate link 6 to hit the ball. Again, bearings are used to allow the free movement of
link 5.
Figure 7-4: Link 5
7.2.3 Link 4
Link 4 is similar in design to link 6 but with a few distinct differences. Firstly, there is a
‘U’ piece in the original location of the bat. This ‘U’ piece connects with link 5 and
allows it to rotate on the bearing inserts seen in the picture. What can also be seen is the
coupling, which attaches the motor to the insert. The second difference between link 4
and 6 is that the main rod is hollow. This is done to reduce weight while still maintaining
a rigid link. The end of the rod again is supported by two bearings which allow it to
rotate freely.
Figure 7-5: Link 4
7.2.4 Link 3
Link 3 is the first link to utilise pulleys and belts. It features a steel shaft that is
connected through the use of D-drives to the main structure of link 3. This structure is
then attached to the housing which holds the bearings from link 4. The cut-out in this
bearing housing allows the coupling between link 4 and the motor shown on link 3 to be
tightened.
Figure 7-6: Link 3
7.2.5 Link 2
Link 2 features a parallel plate design. This allows the structure to remain symmetric.
The belt and motor which drives link 3 can be seen as well as the shaft which rotates link
2. Again a pulley and belt are used to drive link 2. A feature of note is the position of the
motor on link 2. It has been moved up as far as possible in order to reduce the inertia
acting on link 2. The belt can be held in tension even when the motor is not attached.
This setup also has slots to allow the belt to be un-tensioned if needed.
Figure 7-7: Link 2
7.2.6 Link 1
Link 1, being the final link, is required to take the weight of the entire robot arm. It is
thus important to ensure that it is stable and rigid. The motor on link 1 is again mounted
high in order to reduce inertia. The main feature to note on link 1 is the long shaft it uses
to rotate. This is made deliberately long so that its far end could be secured and reduce
the moment acting upon the near end.
Figure 7-8: Link 1
7.2.7 Total Assembly
Figure 7-9: Total Assembly
The total assembly is shown above and the locations of all the links relative to each other
can be seen. Also, the base board and location of the final motor can be seen. The pose
of the robotic arm is approximately the ready position which the arm will return to
between each shot. It has been defined as the average position of all possible shot
positions.
7.2.8 Motor Selection
As each link is completed, its rotational inertia can be calculated and a motor selected
with sufficient torque and speed to control the link adequately. The next link can then be
finalised and its motor selected and so on.
It was decided that the motor controlling link 6 would be a servo, as its velocity did not
need to be controlled and so would only require position control. The remaining motors
were chosen from Maxon Motors. It was decided to use Maxon Motors as they offered
very high quality motors which came with encoder-gearbox combinations. The motors
selected for the project are shown below along with their characteristics.
Table 7-1: Applicable characteristics of the chosen motors
Motor           Gear ratio   Max continuous torque   Torque constant   Max continuous   Max intermittent   Rotor inertia
                             (after gearbox)                           current          current
Motor One       53           15 Nm                   119 mNm/A         944 mA           4.196 A            6.28e-6 kg·m²
Motor Two       26           7.5 Nm                  160 mNm/A         556 mA           2.04 A             6.03e-6 kg·m²
Motor Three     51           6.0 Nm                  160 mNm/A         556 mA           2.04 A             6.03e-6 kg·m²
Motor Four      84           0.8 Nm                  8.55 mNm/A        841 mA           2.3 A              0.403e-6 kg·m²
Motor Five      84           0.8 Nm                  8.55 mNm/A        841 mA           2.3 A              0.403e-6 kg·m²
Motor (servo)   –            –                       –                 –                –                  –
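The selection check described above, namely whether a candidate motor's continuous torque after the gearbox covers the torque implied by the link inertia and the required acceleration, can be sketched as follows (the inertia, acceleration and margin figures below are purely illustrative, not project values):

```python
def motor_ok(link_inertia, peak_angular_accel, max_continuous_torque, margin=1.5):
    """Rough selection check: required torque (with a safety margin) must not
    exceed the motor's continuous torque rating after the gearbox.

    link_inertia in kg*m^2, peak_angular_accel in rad/s^2, torque in Nm."""
    required = link_inertia * peak_angular_accel * margin
    return required <= max_continuous_torque

# Illustrative example: a link inertia of 0.5 kg*m^2 accelerated at 20 rad/s^2
# fits within Motor One's 15 Nm continuous rating, but not Motor Two's 7.5 Nm.
```

A full simulation, as described in the modelling section, would also account for gravity loading, friction, and the inertia reflected from the links further down the chain.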
7.2.9 Bearings
Bearings are used in place of other methods of friction reduction, such as bushes,
because the amount of friction within the system severely affects the required torque of
the motors.
Deep groove ball bearings are used due to their simplicity, availability, and low cost.
Deep groove ball bearings do not prevent axial deflection due to their design. However,
when mounted in pairs, the axial deflections ‘push’ against one another and prevent any
deflection.
Bearings were sourced from Gardner Bearings and three types were used in the final
design. Each bearing came with seals to protect against dust. The four 6201 bearings
were recycled from the test rig once it had finished being used. The type and number of
bearings used are shown below.
• 4 x Deep Groove Ball Bearing - 6201
• 6 x Deep Groove Ball Bearing - 6203
• 4 x Deep Groove Ball Bearing - 6204
7.2.10 Pulleys and Belts
Halfway through the design process it became apparent that the inertias on links 3, 2 and 1
were going to be large and that there would be difficulty in finding motors powerful
enough to overcome them. It was decided at that point to use a drive system which
would locate the motors away from the points they were driving. The two options for
this were either chains or belts.
Research shows that belts are a better and more lightweight choice. More specifically,
timing belts which could eliminate backlash are the obvious choice. The belts and
pulleys were sourced from Small Parts and Bearings.
The belts are all chosen to be the same pitch, so that the pulleys can be the same size.
This reduces the complexity of the design and means that standard belt sizes can be
chosen at a reduced price.
Figure 7-10: Photo of the pulley and belt
7.2.11 Couplings
After the couplings used in the test rig broke, it was obvious that a different, higher
quality coupling was needed for the robotic arm. The couplings used on the test rig
consisted of two sections which pinned to each shaft and then transmitted torque through
a central rubber spider. The pin attachments did not work well, the rubber spider
produced too much backlash and, as noted above, the entire coupling eventually broke.
The couplings in the arm are constructed entirely out of one piece. This means they have
very small backlash and are much stronger. The couplings were sourced from RS and the
clamp type was chosen instead of the pin type. This allows the shafts to be better
gripped, which reduces slippage.
Figure 7-11: Photo of the coupling
8 Vision System
8.1 Overview
The objective of the Vision System is to provide the Ball Path Prediction System with
accurate and fast ball trajectory data, such that the Ball Path Prediction System can
provide the most accurate prediction of the location of the ping pong ball at given future
instants in time within the workspace of the robot.
Two Firewire cameras are used to determine the location of the ping pong ball in 3D
space using stereovision. The centre of the ping pong ball is identified in each camera
using colour-based image recognition. By finding the intersection of the two lines
connecting each camera to the ping pong ball, the location of the ping pong ball in 3D
space is found. This process is repeated for each camera frame. The 3D locations of the
ping pong ball are sent to the control computer for shot evaluation.
The Background Research section examines existing methods for extracting useful data
from 2D images to reconstruct information about 3D space. Performance Measurement
(Section 8.2) specifies the methods that are used to determine the performance of the
Vision System, such that an optimal Vision System can be obtained. Locating the Ping
Pong Ball in 2D Space (Section 8.3) details the procedure that is used to determine the
location of a ping pong ball at a given time in a given image. Locating the Ping Pong
Ball in 3D Space (Depth Perception) (Section 8.4) describes the procedure for converting
2D ping pong data from multiple sources into a 3D ping pong ball location. Camera
Placement (Section 8.5) details the requirements for optimal camera placement in the
Vision System. Camera Calibration (Section 8.6) contains the procedure for generating
a mapping between 2D camera pixels and a 3D point in space. Hardware (Section 8.7)
lists the hardware that is currently used for the Vision System. 2D Camera Calibration
Experiment (Section 8.8) contains a detailed description of the 2D camera calibration
process.
8.2 Performance Measurement
In order to determine the performance of the Vision System, the accuracy and processing
time of the Vision System for stationary and moving balls is determined. A ping pong
ball is placed in different positions within the field of view (FOV) of the Vision System,
and the actual 3D locations of the ping pong ball are compared with the locations reported
by the vision system. The error in distance measurement increases with distance from the
cameras, and the error in the 3D location of the ping pong ball is greatest in the
depth direction.
8.3 Locating the Ping Pong Ball in 2D Space
The location of the ping pong ball and any other objects that are in 3D space are
determined from the 2D camera images. The ping pong ball location at specific points in
time is identified in each 2D image using 2D image processing techniques. Raw 2D
camera images are processed using the error-reduction techniques in Section 8.3.1.
Binary images are monochromatic images that distinguish ping pong ball pixels from non-
ping pong ball pixels, and are generated using the colour-based techniques in Section
8.3.2. These binary images produce ping pong ball trails when a moving ping
pong ball is in the image (Figure 8-1 and Figure 8-2). Ping pong ball locations in 2D
space are then found by using either Centroid Identification or Trail Identification upon
the ping pong ball trails in the binary images.
8.3.1 Error Reduction and Edge Detection Techniques
When images are taken by the cameras, the images contain errors. These errors include
pixel errors, border errors, poor or inconsistent lighting, and timing errors. Pixel errors
appear as bright specks in the camera images and are caused by incorrect readings for
individual pixels. To reduce the effects of pixel errors, the camera images are smudged
using the Blur and BlurMedian functions (Table 8-1). The cameras that are used for the
ping pong robot contain significant error in the pixels that are within five pixels of the
camera image border. To reduce the effects of border errors, image processing is only
performed on the portion of the image that is at least five pixels away from the image
boundary. Inconsistent lighting can make the colour rules that identify the ping pong ball
cease to be valid. To reduce the effects of varied lighting conditions across camera
images, the DeContrast function is used. Timing errors are reduced by accurately
analysing the time-stamps of frames. Ignoring these errors would result in the ping pong
ball not being detected, being detected at an incorrect location, or being detected at an
insufficient frequency to perform accurate path prediction. Image processing is
performed at a frame rate of approximately 15 frames per second; this could be
improved to 30 frames per second if the image-processing algorithm were profiled and
optimised on the university computers.
Some edge-detection algorithms have been developed for analysis of camera images.
These algorithms could be used in the future to identify the ping pong ball table, and the
bat of the opponent or the eye-direction of the opponent. In order to determine edges in a
camera image, differentiation and cutoff functions have been created. Differentiation
functions compute the difference in RGB-space of a pixel with its neighbours. Cutoff
functions use various algorithms to determine where edges lie in each image.
Table 8-1 summarises the above 2D Image Processing Functions.
Table 8-1: 2D Image processing functions
Error Reduction Functions
• BlurMedian: blurs a pixel according to the median of its neighbours
• Blur: blurs a pixel according to the mean of its neighbours
• DeContrast: makes dark (shadowed) pixels brighter and bright pixels darker

Differentiation Functions
• CreateXY: creates a horizontal and vertical set of data for independent differentiation
• Differentiate: differentiates pixels (sets their value to the difference between themselves and their neighbours in the vertical or horizontal direction)
• DifferentiateRelative: differentiations are scaled by the pixel's value, enhancing colour variations in darker regions
• Scale: scales the image to an appropriate level for viewing, since image operations can cause the range of RGB values to change

Cutoff Functions
• SumAbsolute: sums vertical and horizontal sections into one image
• SumSquares: sums the squares of vertical and horizontal sections into one image
• CrossProductCutOff: applies a binary image cutoff according to the cross product of the vertical and horizontal RGB-vectors
• DotProductCutOff: applies a binary image cutoff according to the dot product of the vertical and horizontal RGB-vectors
• BothProductsCutOff: sums the cross product and dot product to provide a binary image cutoff
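As an illustration of the error-reduction stage, a BlurMedian-style filter can be sketched in a few lines (our own minimal single-channel implementation, not the project's code; a production version would operate on each colour channel):

```python
import numpy as np

def blur_median(img, radius=1):
    """Sketch of a BlurMedian-style filter: each pixel becomes the median of
    its (2*radius+1)^2 neighbourhood, suppressing isolated bright pixel errors.
    Assumes a single-channel (greyscale) image."""
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(
                padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1])
    return out
```

A single bright error pixel is outvoted by its neighbours and removed, which is why the median is preferred over the mean for this class of error.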
8.3.2 Colour-based Binary Image Generation
Colour-based Binary Image Generation identifies ping pong ball pixels as those pixels
that lie within the colour-space of the ping-pong ball. Colour-based Binary Image
Generation is fast and simple to implement, but is prone to lighting and shadow
problems. This method can produce lop-sided ball results when only the correctly lit side
of an object is detected. False matches with similarly coloured objects are occasionally
obtained, so a fluoro orange ping pong ball is used. The binary image is displayed
on the computer screen to ensure that peculiarly coloured clothing and other objects that
create phantom ball images in the camera feeds are eliminated.
Figure 8-1: Ping Pong Ball Image after 2D image processing (blurring and contrast reduction)
Figure 8-2: Binary image of a ping pong ball
8.3.3 Alternatives to Colour-based Binary Image Generation
An alternative to Colour-based Binary Image Generation is the use of object and feature
identification in the camera images. Object identification based upon shape rules would
have been more computationally intensive than Colour-based Binary Image Generation.
The shape of the ping pong ball changes from a ball shape when it is stationary to a
ball trail when it is moving. Object identification techniques would have to look for
unknown trail shapes. The edge analysis tools in Section 8.3.1 could have been used for
this purpose (Figure 8-3). Therefore, object and feature identification is not used to
identify the ping pong ball.
Figure 8-3: Edge enhanced image of chair, hand, ball and overhead projector
8.3.4 Colour Calibration
The colour calibration program allows the colour values of points in the live camera feed
to be found. The mouse is used to identify the point to be queried for colour (Figure 8-4).
A weighted moving average is used to reduce errors in the live camera feed. By testing
the bright and dark colours on the ping pong ball under the expected range of lighting
conditions, and drawing a line in RGB-space between the two extreme colours, all pixels
that are within a specified distance of the RGB-space line defining the ping pong ball are
taken as ping pong ball pixels.
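This classification rule, the distance of a pixel from the segment between the calibrated dark and bright ball colours in RGB-space, can be sketched as follows (function name and threshold handling are ours, not the project's):

```python
import numpy as np

def ball_pixel(rgb, dark, bright, max_dist):
    """Return True if the pixel colour lies within max_dist of the RGB-space
    segment between the calibrated dark and bright ball colours."""
    p = np.asarray(rgb, float)
    a = np.asarray(dark, float)
    b = np.asarray(bright, float)
    ab = b - a
    # Project the pixel onto the segment, clamping to its endpoints.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab))) <= max_dist
```

Applying this test to every pixel yields the binary images of Section 8.3.2; the `max_dist` threshold trades off lighting tolerance against false matches.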
Figure 8-4: Identifying the colour-space of the ping pong ball
8.3.5 Timing
Timestamps on the camera images are not obtained directly from the cameras. Images
are timestamped by the high-performance multimedia timer on the Vision System
computer. If the time between the image sample and interrupt generation is inconsistent,
a small timing error may be obtained. Timing errors are monitored by listing the
timestamps that are recorded with camera images.
8.3.6 Locating the Ping Pong Ball in 2D using Centroid Identification
The current method of identifying the location of the ping pong ball in an image is to find
the centroid of all pixels that match the colour space of the ping pong ball (Figure 8-5).
Figure 8-5: Centroid of a binary image of a ping pong ball
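The centroid computation itself is straightforward; a minimal sketch over a binary image (our own illustration, not the project code) is:

```python
import numpy as np

def ball_centroid(binary):
    """Centroid (row, col) of all 'ball' pixels in a binary image,
    or None if no ball pixels are present."""
    ys, xs = np.nonzero(binary)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())
```

For a moving ball the binary image is a trail, so this centroid corresponds to a point part-way along the trail rather than the instantaneous ball position.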
8.3.7 Locating the Ping Pong Ball in 2D using Trail Identification
Trail identification could be used in the future to provide detailed trajectory information
from each frame. The timestamps of successive frames could be combined with a set of
points along the ping pong ball trail to improve the trajectory approximation of the ping
pong ball. The timing and location of each control point along the trajectory would need
to be estimated according to the ping pong ball physics model. The extra information
provided by trail identification techniques could improve the accuracy of the ping pong
ball path prediction.
8.4 Locating the Ping Pong Ball in 3D Space (Depth Perception)
Stereoscopy, ball size and ball shadow identification can be used for depth perception.
Stereoscopic depth perception is the use of ball position information from multiple 2D
cameras to locate the ping pong ball in 3D space. Ball size can be used to determine the
distance of a ping pong ball from a camera, since the number of pixels in a ping pong ball
on the screen is related to the distance of a ball from the camera. Ball shadows on the
ping pong table can be used to identify the distance of the ball from the camera.
8.4.1 Stereoscopic depth perception
Stereoscopic depth perception combines 2D image information from multiple cameras to
provide a 3D location of the ping pong ball. At each time interval, each 2D camera
provides a 2D location of the ping pong ball (Section 8.3). Each 2D camera pixel
corresponds to a ray in 3D space according to the 3D camera calibration (Section 8.8.2).
The two rays that pass closest to each other are used to provide the 3D location of the
ping pong ball (Figure 8-6). The 3D ping pong ball location is taken as the point that is
midway along the line of shortest distance that joins these two ping pong ball rays.
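The midpoint construction can be sketched with the standard closest-points-between-two-lines formula (our own illustration; a production version would also guard against near-parallel rays, for which the denominator below approaches zero):

```python
import numpy as np

def stereo_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two camera rays,
    where ray i passes through camera origin p_i with direction d_i."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero when the rays are parallel
    t1 = (b * e - c * d) / denom   # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom   # parameter of closest point on ray 2
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

When the two rays actually intersect, the shortest segment has zero length and the returned point is the intersection itself.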
Figure 8-6: stereoscopic rays providing a 3D ball location
8.5 Camera Placement
The objective of Camera Placement is to choose the best number, location and orientation
of cameras for the Vision System. Good camera placement is required to minimise
image processing requirements and pixel discretisation error, and to provide results as
quickly as possible.
8.5.1 Unused Field Of View
The unused FOV consists of those regions of the FOV of a camera that extend beyond the
relevant playing area of the ping pong game. If the ping pong ball is in the unused FOV,
its location is not relevant and that portion of the FOV is wasted. The portion of unused
FOV that is at high resolution should be minimised.
8.5.2 Far-Field Resolution
Far-field resolution and depth-perception are poorer than near-field resolution and depth
perception. As the camera normal vectors associated with each camera pixel propagate
further from the camera, the distance between the extended vectors increases. Therefore,
the error in ping pong ball location increases due to diminishing far-field resolution as the
distance from the camera to the ping pong ball increases. Depth-perception error is
proportional to the square of the distance to an object from the cameras and inversely
proportional to the distance between the cameras (Equation 8.1).
By the similar triangles in Figure 8-7,

    dx/dθ = k · r_stereo² / y_stereo    (8.1)

where r_stereo is the distance from the cameras to the target, y_stereo is the spacing
between the cameras, and k is a constant of proportionality.
Figure 8-7: Stereoscopic depth error dx/dθ, with respect to target distance r and camera spacing y
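The scaling behaviour of Equation 8.1 can be illustrated numerically (the constant k is camera-dependent; the default value here is purely illustrative):

```python
def depth_error_scale(r, y, k=1.0):
    """Depth error per unit angular pixel error, as in Equation 8.1.
    r: distance from the cameras to the target; y: spacing between the
    cameras; k: camera-dependent constant (illustrative default)."""
    return k * r * r / y
```

Doubling the target distance quadruples the depth error, while doubling the camera spacing halves it, which is why the cameras should sit as close to the relevant play as practical and as far apart as the mounting allows.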
8.5.3 Accuracy Requirements
The error in ping pong ball location should be matched to the error in the Ball Path
Prediction System and response capabilities of the Ping Pong Robot. If the Ball Path
Prediction System is able to provide accurate ball path information using only the start of
the trajectory of the ping pong ball, then the cameras should be positioned close to the
end of the table of the opponent. However, ping pong physics are not completely
modelled by the Ball Path Prediction System and ongoing correction of the trajectory of
the ball is required. As the ball approaches the workspace of the robot, there may be a
point where visual information becomes unnecessary due to inaccurate path prediction
over short distances and the inability of the robot to respond to alterations in the expected
ball path trajectory. These factors have been taken into account when determining how
close the cameras should be placed to the opponent.
8.6 Camera Calibration
8.6.1 2D Camera Calibration
The ping pong ball is located in 3D space from 2D camera images. In order to locate the
ping pong ball in 3D space, each pixel on each camera is associated with a line relative to
the camera direction. This allows pixels in the camera to be correlated with a line in
physical space once the location and orientation of the camera is known. Determination
of the orientation and location of cameras is obtained in the 3D Camera Calibration
Algorithm (Section 8.6.2).
The 2D calibration program generates a vector field for the pixels of a camera based on
two controlled scans of vertical and horizontal lines at known distance offsets from the
monitor. The primary monitor, which is a Liquid Crystal Display (LCD) to avoid refresh
frequency problems, initially displays a square for camera alignment, followed by a black
and white screen to determine which pixels in the image are under control of the
program. A series of vertical and horizontal scans are placed across the screen, scanning
evenly in each direction to reduce any effects of camera pixel latency or LCD screen
pixel latency. If the scans proceed in only one direction, the latency between pixels being
set to a colour and the pixels achieving that colour could result in a systematic offset to
the calibration results. The scans occur slowly to avoid problems with LCD pixel
latency, camera pixel latency and to improve the accuracy of the final calibration results.
Once the scan-sequence is complete, the relative offset of the camera to an arbitrary
origin (typically 0, 0, 0 for the first run) is queried with the Graphical User Interface
(GUI) of the program. At least two scans are required to generate vector field data, taken
at different distances from the screen.
The vector field is created by drawing a line through the 3D points associated with each
pixel from the 2D camera calibration. The relative distance from these 3D points to the
camera, when combined with the known relative distances between the camera and the
screen across runs, provides the required depth scaling factor to generate a vector for
each camera pixel.
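The per-pixel vector construction, drawing a line through the 3D points a pixel observed at two known screen depths, can be sketched as follows (names are ours, not from the calibration program):

```python
import numpy as np

def pixel_ray(p_near, p_far):
    """Given the 3D points one camera pixel saw on the screen at two known
    depths, return (a point on the ray, the unit direction of the ray)."""
    p_near = np.asarray(p_near, float)
    p_far = np.asarray(p_far, float)
    d = p_far - p_near
    return p_near, d / np.linalg.norm(d)
```

Repeating this for every pixel yields the vector field that the 3D calibration then orients and positions in the world frame.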
Calibration results are saved to disk according to the Globally Unique Identifier (GUID)
of each camera, such that the calibration data is automatically available for use with that
specific camera in the future. The vector fields can also be output into text form for
processing by other programs or for viewing in a spreadsheet. A detailed explanation of
the experimental results from the 2D camera calibration is given in 2D Camera
Calibration Experiment (Section 8.8).
8.6.2 3D Camera Calibration
The objective of the 3D Camera Calibration Algorithm is to determine the location and
orientation of each camera, such that the per-pixel vectors obtained from the 2D camera
calibration can be correlated with lines in physical space.
The 3D calibration program asks the user to identify control points across multiple
images from the cameras (Figure 8-8). Control points are identified from the two
cameras and the 3D coordinates of each control point are entered. The 3D calibration
program optimises the roll, pitch, yaw and scale of the cameras to match the control point data. The optimisation compares the virtual-space control point locations with the real control point locations, adjusting the roll, pitch, yaw and scale of the cameras until the ratio of each line length between control points in virtual space to its length in real space is as close to unity as possible. Roll and pitch are
optimised without requiring the 3D coordinates of the control points. Yaw and scale require the 3D locations of the control points in order to determine how far the cameras toe inwards or outwards and the distance between the cameras. By
measuring all camera properties directly from the camera imagery, a very accurate 3D
camera model is obtained. This accuracy is essential for stereographic depth calculations,
as errors along the line between the cameras are amplified in the depth direction.
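A minimal sketch of the unity-ratio cost described above follows; the optimiser would adjust roll, pitch, yaw and scale to drive this cost towards zero. All names are illustrative assumptions, not the project's code:

```cpp
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

double dist(const Point3& a, const Point3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Mean squared deviation from unity of the virtual/real length ratio,
// taken over every pair of control points.  A perfect camera transform
// gives a cost of zero.
double ratioCost(const std::vector<Point3>& virtualPts,
                 const std::vector<Point3>& realPts) {
    double cost = 0.0;
    int pairs = 0;
    for (size_t i = 0; i < virtualPts.size(); ++i) {
        for (size_t j = i + 1; j < virtualPts.size(); ++j) {
            double ratio = dist(virtualPts[i], virtualPts[j]) /
                           dist(realPts[i], realPts[j]);
            cost += (ratio - 1.0) * (ratio - 1.0);
            ++pairs;
        }
    }
    return cost / pairs;
}
```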
Figure 8-8. 3D control point identification
The correlated pixels must not all lie along one line. As the correlated pixels approach a linear configuration, the transform becomes poorly defined: the possible positions of the second camera relative to the first form a circle in 3D space, as shown in Figure 8-9.
Figure 8-9: Singular configurations of correlated lines caused by linear or closely placed points
The 28 control points identified in the fields of view of the two ping pong cameras are displayed in virtual space in Figure 8-10 and Figure 8-11.
Figure 8-10. 3D calibration control points in virtual space (front view)
Figure 8-11. 3D calibration control points in virtual space (top view)
The ping pong ball position is determined after the cameras are fully calibrated, and this
can be used for path-prediction and shot generation by the ping pong robot.
The 3D Calibration Algorithm determines the positions and orientations of the cameras in
physical space. The relative camera positions and orientations in 3D space are combined
with the 2D Camera Calibration Algorithm results in order to determine 3D locations of
the ping pong ball (Figure 8-12).
Figure 8-12. 3D calibration results
The camera transform is not expected to change between calibration and use. If the
cameras are moved or rotated with respect to each other, either the cameras need to be
recalibrated or adaptive error minimisation (see Section 8.6.3) is required.
In order to determine the position of the ping pong ball, a line is drawn from each camera in the direction of the ping pong ball. In theory these lines should intersect; in practice, timing, colouration, rounding and calibration errors mean that the two lines generally do not. To approximate the point of intersection of two lines that almost intersect, the shortest segment joining them is found and its midpoint is used as the location of the ping pong ball (Figure 8-13).
Figure 8-13: Minimization of distance between correlated lines
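The midpoint construction can be sketched with the standard closest-approach formula for two skew lines; this is an illustrative implementation, not the project's code:

```cpp
#include <cmath>

// Each camera contributes a line p + t*d.  The ball estimate is the
// midpoint of the shortest segment joining the two (generally skew) lines.
struct V3 { double x, y, z; };

double dot(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

V3 ballEstimate(const V3& p1, const V3& d1, const V3& p2, const V3& d2) {
    V3 w{p1.x - p2.x, p1.y - p2.y, p1.z - p2.z};
    double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    double d = dot(d1, w), e = dot(d2, w);
    double denom = a * c - b * b;        // zero only if the lines are parallel
    double t = (b * e - c * d) / denom;  // parameter of closest point on line 1
    double s = (a * e - b * d) / denom;  // parameter of closest point on line 2
    V3 q1{p1.x + t * d1.x, p1.y + t * d1.y, p1.z + t * d1.z};
    V3 q2{p2.x + s * d2.x, p2.y + s * d2.y, p2.z + s * d2.z};
    return V3{(q1.x + q2.x) / 2, (q1.y + q2.y) / 2, (q1.z + q2.z) / 2};
}
```

If the two lines happen to intersect exactly, the two closest points coincide and the midpoint is the intersection itself.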
8.6.3 Adaptive Error Minimisation
Adaptive error minimisation between ping pong ball tracking points could be used to
compensate for changes in the camera positions and orientations due to unexpected
rotations or translations of the cameras with respect to each other. Adaptive error
minimisation is the use of correlated points during operation, rather than during
calibration, to update the relative camera transform. The error distance between
stereoscopic lines can be minimised by slightly modifying the 3D camera transform.
Adaptive error minimisation could be used on stationary reference points in the FOV of
all cameras. By using controlled, stationary reference points under controlled lighting
conditions, the problems with accurate centroid identification and timing errors can be
eliminated from the adaptive error minimisation problem. The reference points would
ideally be Light Emitting Diodes (LEDs) in order to prevent shadow from obscuring the
reference points. The use of LEDs allows phantom LEDs (pixels that have colours that
are similar to the reference LEDs) to be easily removed by controlling the on/off state of
the reference LEDs. Periodically or intermittently flashing LEDs could distract human or robotic opponents, so the LEDs could be flashed only between rallies, or infrared (IR) LEDs could be used.
Measures should be taken to prevent occlusion of the LEDs from inhibiting adaptive
error minimisation. Redundant LEDs can be used such that if some LEDs are occluded,
the remaining LEDs can be used as reference points. A method for notifying users of
LED occlusion should be implemented. LED occlusion warnings should start with a warning pattern in the occluded LED. If enough LEDs are occluded to prevent adaptive error minimisation from occurring for a minimum length of time, a visual message should be generated on the GUI of the robot or other output devices. An audio warning of increasing intensity could follow if the visual warnings are ignored for a minimum length of time. If the robot has not played any shots for a minimum length of time, the audio warnings should not be generated.
Although adaptive error minimisation can eliminate errors due to unexpected rotations or
translations of the cameras with respect to each other, this method requires an accurate
correlation between points in space and time across different cameras. Lighting
conditions can cause the tracking points of the ping pong ball to appear at different
locations on the ping pong ball across images. Small timing errors could also cause the
correlated points across images to have large differences between cameras. The errors in
correlation of ping pong ball tracking points may in fact be larger than any changes in the
3D camera transforms that adaptive error minimisation may seek to correct. The Vision
System performance (Section 8.2) with various scaling factors could be compared with
the Vision System performance with static 3D camera calibration. The transform
determination method that provides the best Vision System performance should be used
for the Ping Pong Robot Vision System.
8.7 Hardware
The camera image acquisition system depends on DirectX, which in turn requires the Windows operating system. Therefore, Windows XP is used as the operating system for the Vision System.
Two Fire-I digital FireWire cameras, their cables and a FireWire port are used. FireWire cameras allow higher data transfer rates than USB devices and require less processing time from the computer. The cameras used in this project allow a maximum frame rate of 30 frames
per second at 320×240 resolution. A high frame rate and sufficient resolution are critical
for locating the ping pong ball quickly and accurately.
8.8 2D Camera Calibration Experiment
8.8.1 Procedure
For each camera that needs to be calibrated, the 2D Camera Calibration is run at two
separate distances from the computer LCD screen. The LCD screen is placed directly in
front of the camera. Vision output from the camera is displayed on the secondary monitor. The alignment of the camera is verified by ensuring the test pattern is aligned in the secondary monitor, as shown in Figure 8-14. The calibration procedure is then performed. Firstly, the screen is made black, then white, to determine which pixels in the field of view of the camera fall within the primary monitor. Secondly, a series of
horizontal and vertical black and white rectangles are presented to the camera at 1 pixel
intervals. The cutoff between black and white is recorded and used to create a correlation
between screen and camera pixels. After the calibration run, the field of view is
displayed on the screen with a crosshair. The secondary monitor is used to verify the
field of view results (Figure 8-15). The position coordinates of the camera are entered
and the procedure is repeated at a different distance from the screen. The procedure is
repeated for both cameras. The results are recorded in a file as a vector and error field for
each camera pixel for each camera that is tested.
In Figure 8-14 and Figure 8-15, the left screen holds the display that the camera is
viewing and the right screen holds the resulting image that the camera detects.
Figure 8-14: Alignment test pattern prior to calibration
Figure 8-15: Camera calibration raw FOV results
8.8.2 Camera Vector Field Results
The final result of the 2D Camera Calibration is the generation of a normal vector for
each camera pixel, such that objects in the 2D space of the camera can be projected along
a line in 3D space. The visualisation mechanism for the camera vector fields is a
program that allows the user to inspect the vector field results using a joystick. Cyan
lines are drawn between adjacent pixels in the camera calibration results in order to view
the degree of linearity of each row and column in the vector field. Green lines are drawn
between the origin of the camera and the corner points that are taken five pixels in from
the actual camera corners, since the cameras used in this project are unreliable in the edge
region. The red line is the centreline of the camera (Figure 8-16). Figure 8-17 and
Figure 8-18 detail zoomed in views of the camera calibration data for error visualisation.
Figure 8-16: Camera field of view vector field
Figure 8-17: Typical midrange per-pixel error before smoothing
Figure 8-18: Wide viewing range 2D camera calibration results
8.8.3 Discussion
The 2D Camera Calibration results produce vectors on the interior of the camera that
appear to line up with reasonable accuracy (Figure 8-18). There are still some deviations
from straight lines along each row and column of the camera calibration. The edge
results display significant errors (Figure 8-19 and Figure 8-20) due to the right side of the
screen becoming bright when the top portion of the screen is exposed to light and the
bottom portion of the screen is kept dark. Therefore, the edge results are ignored.
Figure 8-19: Edge saturation along right side of screen
Figure 8-20: 2D Camera calibration edge errors caused by edge saturation
8.9 Conclusion
The vision system of the ping pong robot transmits the location of the ping pong ball to
the control computer quickly and accurately. The desired performance requirements of
accurate positioning and high frame rates have been achieved by using DirectX and C++.
The ping pong robot vision system can be used for high precision 3D stereoscopic
imaging applications. The 2D and 3D calibration programs have generated the 3D line
corresponding to each pixel in each camera independently from experimental data. This
accuracy is essential for stereoscopic imaging applications, as small errors in the camera data can produce large errors in the distance observed by the stereoscopic imaging system.
Edge analysis tools have been developed for more complex analysis of the scene in the
future.
9 Control Hardware
9.1 Microcontroller
The robot system is split into two parts: the host controller and the client robot arm. For immediate responsiveness, the robot arm requires a microcontroller which can directly drive each motor and read each sensor. This microcontroller receives high-level movement commands from the host and breaks them down into the specific requirements for each motor.
The microcontroller considered for this task is the Freescale MC9S12DP256, which provides a wide range of functionality, much of which is required here. The Wytec Dragon 12 development board (Figure 9-1) was used as it was inexpensive and made utilising the basic functions of the microcontroller very simple.
Figure 9-1: Wytec Dragon 12 Development Board
9.2 Output
The system consists of six motors. Of these, five are simple DC motors with an attached
encoder. The second to last joint in the system is less restricted by the timeframe of the
shot than the other joints, and this joint is controlled by a servo motor for simplicity.
Each of the five DC motors is controlled by a current controller, which attempts to regulate the maximum current through the motor to a set level. This level is set by the microcontroller via an analog signal.
The self-contained servo motor is controlled by a square wave, which represents the
desired position of the motor.
There are several options for direct control of the motors. Since the project is implemented using DC motors, the microcontroller is required to control either the speed or the position of the motors. Direct voltage control was considered for controlling the speed. However, as the control system requires direct control of the torque of each motor, a different method of control is required.
For this, a full-bridge motor driver, the LMD18245 (Figure 9-2) from National Semiconductor, is used. This allows the microcontroller to simply output a required torque, which the driver then seeks to achieve. The microcontroller sets a desired torque D across the M4, M3, M2 and M1 pins, and a desired analog multiplier V through DAC REF, and the driver attempts to provide a current to the motor of up to V × D/16 (scaled to some extent by the electronics connected to the driver).
Figure 9-2: The LMD18245 Full-Bridge Amplifier
The DAC REF analog input is created by using the PWM output from the
microcontroller, and passing it through a low-pass filter in order to create an analog
signal between 0 and 5V.
The frequency of the PWM output is set to 5.88kHz. The simplest low pass filter for this
task is constructed using a small resistor and capacitor to filter the low frequencies, as
shown in Figure 9-3.
Figure 9-3: Simple low-pass filter
By relating the cutoff frequency f to the time constant τ,

f = 1/(2πτ) = 1/(2πRC)

suitable values for R and C can be found. By passing a sine wave emulating the PWM signal through this filter, it was found that reasonable values for R and C were quite small, and allowed a fast response to changes in the PWM intensity. The values chosen were:

R = 1 Ω
C = 4.7 μF
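The cutoff relation can be checked with a small helper function; the component values used in the test are illustrative only, not the project's choices:

```cpp
// Cutoff frequency of a first-order RC low-pass filter, f = 1/(2*pi*R*C).
double cutoffHz(double ohms, double farads) {
    const double kPi = 3.14159265358979323846;
    return 1.0 / (2.0 * kPi * ohms * farads);
}
```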
A drawback in this method of motor control is that the current controller does not
perform as specified under our conditions. The current output appears to be nonlinear
with respect to the input, and this makes controlling the motor reliably very difficult. The
results of a short test follow:
Figure 9-4: Current limits for given D input, for a loaded motor and max DAC Ref
9.3 Input
Figure 9-5: The HRPG Series Encoder
Each DC motor will be accompanied by an optical encoder, used to read position data
and convert this data to a rotational velocity. For testing, the HRPG series from Agilent
Technologies was used (Figure 9-5). The encoder produces 64 pulses per channel per revolution, allowing a maximum resolution of 64 × 4 = 256 counts per revolution when both edges of both channels are counted. The encoder algorithm was tested using a tachometer, and found to be accurate. The results are shown in Table 9-1. Note that at this point any inaccuracies are likely caused by the low resolution
per revolution of the test encoder relative to the motor shaft. The encoders attached to the purchased DC motors have a resolution of 100 pulses per revolution, or 400 ticks in quadrature. When the gearing of the motors is taken into account, along with the fact that the encoders are coupled directly to the motor shaft, the effective resolution at each joint is much higher.
Table 9-1: Encoder test results

Motor voltage (V)   Motor actual speed (rpm)   Motor recorded speed (rpm)
12                  76.8                       77.05 to 75.98
11                  70.4                       69.56 to 70.63
10                  63.8                       63.14 to 64.21
9                   57.4                       56.72
8                   50.3                       49.22 to 50.29
7                   43.4                       42.8
9.4 Software
The microcontroller software is required to perform a number of primary functions:
• Read the changes in encoder values
• Send the changes in encoder values to the host computer
• Retrieve the required torque values from the host computer
• Update the motors to the required torque values
There exist several methods for each of these tasks.
9.4.1 Encoder Reading
Standard optical encoders operate by producing, on turning, two separate square waves
on two separate channels. These waves are slightly out of phase, and this allows for
direction and angle to be discerned.
Figure 9-6: Possible encoder output on encoder rotation in the clockwise direction
Consider the state at point 1B in Figure 9-6. If the next state is 1C, with channel A high and channel B low, the encoder has moved in a clockwise direction. However, if channel A instead becomes low, the encoder has moved in an anticlockwise direction.
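This decoding logic is commonly implemented as a lookup table indexed by the previous and current two-bit channel states. The sketch below is illustrative, not the microcontroller source; which sign corresponds to clockwise depends on the wiring, so treat the polarity as an assumption:

```cpp
#include <cstdint>

// Signed step for each transition, indexed by (previousState << 2) | currentState,
// where a state is (A << 1) | B.  Entries of 0 are either "no change" or an
// illegal two-edge jump (direction cannot be discerned, as described above).
static const int8_t kStep[16] = {
     0, -1, +1,  0,
    +1,  0,  0, -1,
    -1,  0,  0, +1,
     0, +1, -1,  0 };

struct QuadDecoder {
    uint8_t prev = 0;   // last two-bit channel state
    long    count = 0;  // accumulated position in quadrature ticks

    void update(uint8_t a, uint8_t b) {
        uint8_t curr = static_cast<uint8_t>((a << 1) | b);
        count += kStep[(prev << 2) | curr];
        prev = curr;
    }
};
```

Counting on every edge of both channels gives the full ×4 quadrature resolution; sampling only one edge of one channel would divide the resolution by four.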
Using the Freescale MC9S12DP256 microcontroller, there are several options for
tracking the encoder changes. The first, and simplest, is polling the encoders. This
involves frequently checking the values of the encoder channels for changes. However, even with the test encoders, which have a maximum resolution of 256 counts per revolution, the microcontroller would be required to spend much of its processor time reading the port values, and there is a high probability of data loss and incorrect interpretation.
For example, in Figure 9-6, ideally, the microcontroller would be required to poll the
encoder data at 1A, 1B, and 1C (while moving clockwise). However, if the encoder
speed increases, or the rate of this data polling is not high enough, then it is possible that
the encoder data will be read at 1A and 1C, and at this point it is impossible to discern
whether the encoder has moved clockwise or anticlockwise.
Another, more practical, solution is to enable interrupts on each of these channels.
The interrupt routines occur on the rising or falling edge of both of the channels, such as
in 2A, 2B or 2C in Figure 9-6. The downside to this is that for a high turn rate or a high
resolution of encoder, the interrupt is required to occur very frequently, and each
interrupt routine requires processor time.
Another solution is to use the enhanced capture timer available on the Freescale
MC9S12DP256. Using this timer it is possible to capture a number of rising edges as
they occur on either or both channels, and this occurs without constant program attention.
However, the capture timer stores the captured edge counts in four separate 8-bit registers, which allows a maximum of only 255 edges each. This is unlikely to be high enough for the task required. Furthermore, the microcontroller has only four registers for this task, while five encoders are mounted on the robot, so another solution needs to be used for at least the extra encoder.
The final solution decided upon was the interrupt driven encoder reader. The interrupt
routine occurs on two ports, port J and port H. This allows for five encoders to be
connected, and is expandable to six encoders. The interrupt occurs on both rising and
falling edges on both channels, allowing for the maximum resolution of each encoder to
be utilised. For five encoders, this can be processor intensive, and the program can be
modified to allow the interrupt to only occur, for example, on the rising edge of one
channel, in order to reduce the resolution by a factor of four.
9.4.2 Motor Update
9.4.2.1 DC Motors
The motors are moved by sending an analog signal to the current controller of each
motor. The immediate solution to creating this analog signal is to produce it with the
Digital to Analog Converters (DAC) readily available on the Dragon 12 board. These
DACs use the Inter-Integrated Circuit bus (IIC) on the microcontroller. Using the IIC
involves transferring several packets of data at set intervals and minor communication
between the DAC modules.
There exist a few problems with this solution. Upon testing it was found that using the
IIC leaves the program prone to long busy waits. These waits can be eliminated by
enabling an interrupt routine on the IIC bus. However, every time each DAC is updated,
it requires five separate sequential interrupts to occur, with new data being sent to the
DAC on each routine. If five DACs were used, the time taken to update all of the motors would be significant. Another concern is that the board has only two DACs connected, and this number is normally expandable to only four. Since five motors are required in the model, another solution is used.
By passing a Pulse Width Modulation (PWM) through a low pass filter, it is possible to
create an analog signal between 0 and 5 V. An advantage of this method is that the PWM signal is generated “behind the scenes” and does not require the constant attention of the program. The microcontroller also provides eight separate 8-bit PWM channels with a resolution of 255, or four 16-bit channels with a resolution of 65535. As five motors are required to be controlled this way, five separate 8-bit channels were used.
9.4.2.2 Servo Motor
The servo motor chosen, created by Hitec, is controlled by a square wave that represents
the position of the motor, as shown in Figure 9-7.
Figure 9-7: Servo motor signal behaviour
The signal is 20 ms long, and is high for 0.9 ms to 2.1 ms, depending on the required position of the motor. Using an 8-bit PWM channel and a 20 ms wave, the resolution of the 0.9 ms to 2.1 ms region is:

(2.1 − 0.9) ÷ 20 × 256 ≈ 15

The servo motor has a movement range of 180°. If this method were used, the motor would only be able to move in steps of 12°. However, using a 16-bit PWM channel and a 20 ms wave, the resolution of the 0.9 ms to 2.1 ms region is:

(2.1 − 0.9) ÷ 20 × 65536 ≈ 3932
This allows for observably smoother movement over this 180° range, in steps of around
0.05°.
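The step-size arithmetic above generalises to any n-bit PWM channel. The sketch below is illustrative (the 0.9–2.1 ms band and 20 ms period come from the text; the function names are ours):

```cpp
// Number of PWM counts that fall inside the 0.9-2.1 ms active band of a
// 20 ms servo frame, for an n-bit PWM channel.
double pwmStepsInBand(int bits) {
    double counts = static_cast<double>(1 << bits);  // e.g. 256 or 65536
    return (2.1 - 0.9) / 20.0 * counts;
}

// Resulting angular step over the servo's 180 degree range.
double degreesPerStep(int bits) {
    return 180.0 / pwmStepsInBand(bits);
}
```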
9.4.3 Serial Communication
The communication involves sending constructed packets of 16 bytes across a serial
interface. Sending this data continually without the use of interrupts takes considerable
processor time. The serial interface used in this project is implemented by adapting an
interrupt driven serial interface written by Dr Frank Wornle. This allows for the serial
communication device to be attended to only when necessary.
9.4.4 Software Behaviour
The software is entirely interrupt driven. At start up, the program initialises all
components of hardware and software, and enables all necessary interrupts. While idle,
the program also updates the display and toggles the heartbeat LED twice a second.
Figure 9-8: Simplified state diagram of the microcontroller software
Shown in Figure 9-8 is a simplified state diagram of the software. By default, the
MC9S12DP256 does not allow nested interrupts. This means that if an interrupt,
regardless of priority, occurs during another interrupt routine, it will not be serviced until
the current routine has completed. This behaviour is shown in Figure 9-9.
Figure 9-9: Non-maskable interrupts
As can be seen, it is possible for an interrupt to not be serviced for a long period of time
if an existing interrupt is being serviced. If the interrupt being forced to wait is the
encoder interrupt, then it is possible that encoder data will be lost before it is attended to.
Because of the high rate of data processing related to the encoders, the system requires nested, maskable interrupts. This allows some interrupts to pre-empt others and be attended to immediately if necessary. Using a customised priority scheme, it is possible to alter which interrupts have a higher priority and which are capable of being interrupted. This system is shown in
Figure 9-10.
Figure 9-10: Maskable interrupts
The timer interrupt occurs around every 1.73ms, and updates the software timers, such as
the timer used for toggling the heartbeat LED, and the timer used for the communications
timeout.
The communications interrupt occurs every time the microcontroller receives a byte of
data from the serial connection, or when the serial connection is ready to be used while
data needs to be sent. During this sequence, all data is retrieved from the serial
connection, and decoded if necessary. If necessary, the motors are updated to their new
torque values. After a successful command has been received, the program then
establishes the return data sequence, where interrupts occur when the serial connection is
ready to send data. This involves sending the encoder change data back to the control
system, and clearing the encoder data.
The encoder interrupt occurs frequently, on every rising or falling edge on all encoder
channels. When the interrupt occurs, it quickly stores the interrupt data and encoder
channel data, and then allows more encoder interrupts to occur while the data is processed. This way, if encoder interrupts occur faster than the data can be processed, the excess is buffered and dealt with when more processor time is available.
The encoder interrupt has the highest priority and is capable of interrupting the program at any point, with the exception of some critical sections of the communications routine.
The encoder on the second to final motor alone generates 32000 encoder interrupts per
revolution of the motor, and as such the interrupts need to be dealt with quickly and
efficiently.
10 Communication
The communication between the three systems is achieved by two serial connections.
Data is sent from the vision system to the control system regarding the position of the
ball. Data is sent from the control system to the microcontroller regarding the required
action of the motors, and the microcontroller returns the position of the encoders.
The data sent from the vision system to the control system takes the following format:

        x              y              z              Timestamp
Type    32-bit float   32-bit float   32-bit float   64-bit timestamp
Units   metres         metres         metres         time intervals since 1 January 1970

Table 10-1: Data packet sent from vision system to control system
The vision system requires no return information from the control computer.
The data sent from the control system to the microcontroller takes the format shown in
Table 10-2. The packet involves a start word, five numbers representing the torques to
each of the motors, one number representing the position of the servo motor, and an end
word.
        Start   1st Motor     2nd Motor     3rd Motor     4th Motor     5th Motor     Servo Motor   End
Hex     7F 7E   XX XX         XX XX         XX XX         XX XX         XX XX         XX XX         7D 7C
Dec     32638   -255 to 255   -255 to 255   -255 to 255   -255 to 255   -255 to 255   0 to 2317     32124
Table 10-2: Data packet from control system to microcontroller
The data returned from the microcontroller takes the format shown in Table 10-3. The
packet involves a start word, five numbers representing the change in position of each of
the encoders since the last sent packet, one unused word, and an end word.
        Start   1st Encoder       2nd Encoder       3rd Encoder       4th Encoder       5th Encoder       Not used   End
Hex     7F 7E   XX XX             XX XX             XX XX             XX XX             XX XX             00 00      7D 7C
Dec     32638   -32768 to 32767   -32768 to 32767   -32768 to 32767   -32768 to 32767   -32768 to 32767   0          32124
Table 10-3: Data packet sent from microcontroller to control system
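As an illustration of the 16-byte command packet in Table 10-2, the following sketch packs the start word, five torque values, the servo position and the end word. The high-byte-first ordering is an assumption, as the tables do not state the byte order, and the names are ours:

```cpp
#include <cstddef>
#include <cstdint>

const uint16_t kStartWord = 0x7F7E;  // decimal 32638
const uint16_t kEndWord   = 0x7D7C;  // decimal 32124

// Write a 16-bit word into the buffer, high byte first (assumed ordering).
void putWord(uint8_t* buf, size_t offset, uint16_t value) {
    buf[offset]     = static_cast<uint8_t>(value >> 8);
    buf[offset + 1] = static_cast<uint8_t>(value & 0xFF);
}

// torques: five values in -255..255; servo: 0..2317.
void packCommand(uint8_t (&buf)[16], const int16_t torques[5], uint16_t servo) {
    putWord(buf, 0, kStartWord);
    for (int i = 0; i < 5; ++i)
        putWord(buf, 2 + 2 * i, static_cast<uint16_t>(torques[i]));
    putWord(buf, 12, servo);
    putWord(buf, 14, kEndWord);
}
```

The return packet of Table 10-3 has the same shape, with the five signed encoder deltas in place of the torques and zeros in the unused word.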
The control system initialises the connection by sending a data packet, which also gives
the motors their initial values. The data from the microcontroller is immediately returned
when a packet is received. This transfer of data occurs at around 300Hz, in order to
ensure a smooth transition over the entire trajectory of the arm. Furthermore, this data is
not time stamped, so it is important that the data be dealt with as soon as it is received.
11 Cost Analysis
During the initial research of the project a small amount of time was spent on cost
analysis. It was discovered that the funds provided by the University for a four person
project would not be sufficient, and so other sources of funding were sought. The project supervisor was able to secure additional funds, which made the project viable.
Also during the initial research of the project it was decided that high precision motors
from a supplier such as Maxon Motors would be required. Quotes for these motors were sought but not obtained, so estimates are used instead.
All items required for the test rig have been purchased, and so provide an adequate guide
for estimates on the costing of the robot arm. Two cameras were purchased and have
proved sufficient for the project needs. Additional cameras are expected to be required at
the same cost as the originals.
At the current time no costing has been applied to the control section; this will likely occur much later in the project, after the arm has been built and the vision system completed.
Table 11-1: Cost Analysis
ITEM                 QTY.   COST EA.   COST
Bearings: 6201       6      $6.17      $37.02
          6203       6      $8.59      $51.54
          6204       4      $12.55     $50.20
Couplings: Test rig  2      $18.50     $37.00
           Robot arm 2      $39.80     $79.60
Pulleys:             6      $27.52     $165.12
Belts: B-T5-065      1      $24.35     $24.35
       B-T5-084      1      $27.05     $27.05
       B-T5-128      1      $32.74     $32.74
Motors: Test rig     1      $70.00     $70.00
        Motor 1      1      $675.00    $675.00
        Motor 2      1      $650.00    $650.00
        Motor 3      1      $650.00    $650.00
        Motor 4      1      $265.00    $265.00
        Motor 5      1      $265.00    $265.00
        Motor 6      1      $29.90     $29.90
Cameras:             2      $200.00    $400.00
Encoders:            5      $14.20     $71.00
Printing/Admin:      1      $150.00    $150.00
Test Rig:            1      $30.00     $30.00
Robot Arm:           1      $500.00    $500.00
Total:                                 $4,260.52
12 Conclusion
The goal of this project was to design and implement a robot that was capable of playing
the game of table tennis. A number of distinct systems including the physical design of
the robot, drive system, vision and robot control were developed. The separate
components were then integrated via a serial communication scheme in order to produce
an overall functional system.
The final arm design is capable of reaching all positions within its 3D workspace. In addition to this, it is able to reach all these positions with angles of up to 40° in all
directions. The motors selected are capable of driving the arm to the required positions
within the specified shot time of 0.5 seconds and with sufficient velocity to hit the ball
back. Overall, the robotic arm designed is a complete working unit which meets all the
requirements of the project.
The robot arm was completely defined mathematically so that each link could be
coordinated to achieve the shot trajectories necessary to play table tennis. Based on the
kinematic model of the arm, the full arm dynamics were specified. The control system implements online calculations of the robot dynamics to produce a controller that takes into account the arm's current configuration and motion. This form of control allows the difficult problem of non-linear serial manipulator control to be addressed. The overall
control program takes in position information from the vision system, plans trajectories
for each degree of freedom and controls the robot along that trajectory. Additionally, the
control system can be easily applied to any configuration of serial manipulator, providing
a physical model of the arm is available.
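As a sketch of this outer-loop control law, the commanded joint acceleration combines the planned acceleration with PD feedback on position and velocity error, and the inverse dynamics then map that acceleration to a torque. The single-link model, names, and gains below are illustrative assumptions only, not the project's actual implementation (which evaluates the full Newton-Euler dynamics across all links):

```cpp
#include <cassert>
#include <cmath>

// Computed-torque control for a single revolute link (illustrative only).
// The commanded acceleration is the planned acceleration plus PD feedback
// on tracking error; the inverse dynamics then convert that acceleration
// into a joint torque.
struct LinkModel {
    double inertia;   // effective inertia about the joint [kg m^2]
    double mgl;       // mass * g * distance to centre of mass [N m]
};

double computedTorque(const LinkModel& link,
                      double q, double qd,                       // measured state
                      double qDes, double qdDes, double qddDes,  // planned state
                      double kp, double kv)                      // feedback gains
{
    double qddCmd = qddDes + kv * (qdDes - qd) + kp * (qDes - q);
    // Inverse dynamics of a single link: inertial term plus gravity load.
    return link.inertia * qddCmd + link.mgl * std::cos(q);
}
```

With zero tracking error the commanded torque reduces to gravity compensation plus the feed-forward inertial term; it is this cancellation of the non-linear terms that lets the arm behave like a linear system under the feedback gains.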
The vision system is able to correctly identify ping pong balls in 2D images and
correlate these positions with a line in space relative to a camera. 3D calibration of the
cameras, together with optimisation of the camera placement and calibration data,
determines ping pong ball locations in 3D space. Ball trail identification techniques could
be implemented in the future to improve the resolution of the position data that is
provided to the ping pong ball path prediction system. Analysis of the camera imagery
for additional information, such as the table, robot arm, and opponent's bat, may also be
performed in the future.
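The step from two per-camera lines to a single 3D ball position can be sketched as the midpoint of the shortest segment between the two rays. The vector type and function names below are illustrative, not the project's vision code:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)      { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b)      { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, double s)  { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Estimate a 3D point from two camera rays p1 + t*d1 and p2 + s*d2 by
// taking the midpoint of the shortest segment between them.
Vec3 triangulate(Vec3 p1, Vec3 d1, Vec3 p2, Vec3 d2)
{
    Vec3 r = sub(p1, p2);
    double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    double d = dot(d1, r),  e = dot(d2, r);
    double denom = a * c - b * b;   // approaches 0 for near-parallel rays
    double t = (b * e - c * d) / denom;
    double s = (a * e - b * d) / denom;
    Vec3 q1 = add(p1, scale(d1, t));  // closest point on ray 1
    Vec3 q2 = add(p2, scale(d2, s));  // closest point on ray 2
    return scale(add(q1, q2), 0.5);
}
```

When calibration is perfect the two rays intersect and the midpoint is the intersection; with calibration error the residual gap between q1 and q2 also gives a useful quality measure for the fix.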
The hardware is capable of all required tasks; however, the current controllers are unable
to regulate the motor currents in a way that is suitable for the control system. If the
current drivers were reliable and capable of producing the required currents in a
predictable way, the control system would perform as predicted. Either the behaviour of
these current controllers needs to be investigated more thoroughly, or another, more
sophisticated method of regulating current needs to be found.
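One candidate for such a method is a discrete PI loop closed around a measured motor current. The structure below is an illustrative sketch only, and assumes a current-sense measurement that the present hardware may not provide:

```cpp
#include <cassert>
#include <cmath>

// Discrete PI current regulator (illustrative sketch). Each control tick
// compares the commanded current with a measured current and adjusts a
// normalised output (e.g. a PWM duty cycle); the integrator is rolled back
// whenever the output saturates, to avoid wind-up.
struct CurrentRegulator {
    double kp, ki, dt;
    double integral = 0.0;
    double outMin = -1.0, outMax = 1.0;

    double step(double commanded, double measured) {
        double error = commanded - measured;
        integral += error * dt;
        double out = kp * error + ki * integral;
        if (out > outMax)      { out = outMax; integral -= error * dt; }  // anti-windup
        else if (out < outMin) { out = outMin; integral -= error * dt; }
        return out;
    }
};
```

The integral term drives the steady-state current error to zero even when the driver gain is imprecise, while the clamp and anti-wind-up step keep the command within the driver's usable range.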
13 Appendices
Appendix A - Official Rules of ‘Robat’
1) The ball is a standard table-tennis ball, with no special markings.
2) The table is 2 meters long and 0.5 meters wide. It stands 0.75 meters above floor
level.
3) The table surface is smooth and matte black without boundary lines.
4) At each end of the table is a vertical “playing frame”, internal size 0.5 meters
square. The boundary of these frames will be formed by a wire thread which
carries the edge of a fine net, supported on a rigid outer frame 1 meter wide and
0.75 meters high, thus minimising optical obstruction.
5) In the centre of the table a third vertical frame is mounted, with internal
measurements 0.5 meters wide and 0.75 meters high, of a material similar to the playing frames. A
fine wire is stretched across this frame 0.25 meters above the table, supporting a
transparent net (similar to hair-net material). Nets will not obstruct more than ten
percent of light passing through them – probably much less. The outer supporting
frame will be 1 meter wide, but the top cross member is rigid at a height of 0.75
meters above the table.
6) The top of the net frame supports a serving device. This holds the ball in full view
of both robots, with centre about 0.625 meters above the centre of the table
surface. The structure is of wire not more than one millimetre thick, and after
serving the ball the mechanism retracts entirely above the level of the top net
frame. The ball is served towards the “serving” robot, to bounce once before
emerging from the playing frame.
7) Lighting is provided by tungsten lights, mounted at a height of 2 meters on poles
at the corners of a 4 meter square and centred on the table. The light level is
likely to be around a Weston meter reading of 10, corresponding to an exposure
of 1/60 second at f5.6 on 100 ASA film.
8) A level space one meter square is provided abutting each end of the table within
which each robot must stand. No part of the robot must touch or project forward
of the playing frame. The robot should not extend laterally more than one meter
to either side of the frame centre, i.e. 2 meters overall. A power outlet will be
provided at the edge of each standing space. In Europe this will be 220-240 volts
at 50 Hz, fused at 5 amps; in the USA it may be 110 volts, 60 Hz, fused at 10
amps.
9) A moveable vertical black sight-screen 1.5 meters wide and 2 meters high will be
located behind each robot, 2 meters behind the playing frame.
10) The bat size must be contained within a circle 12.5 centimetres in diameter. The
bat must propel the ball by hitting it once with its surface – no catching, blowing,
electrostatic repulsion or other variations allowed. The bat surface can be curved
if desired, but double-hitting will lose the point.
11) Those parts of the robot visible to the opponent must be black, including
absorption of infra-red in the region of one micron wavelength. This is satisfied
by black emulsion paint. If the opponent insists, and can show that he has sensors
to detect it (unlikely in the first year), the bat must carry a high-brightness red
LED at its centre and a green LED 5 cm away from it.
12) Apart from such LED’s, the robot must not project light towards its opponent. To
detect the approach of the ball to the bat, a cross beam can be used. It must then
be clear that any light spilled towards the opponent will come only from the ball
itself, and unreasonable brightness levels must not be used.
13) Ultrasonic transmissions are only allowed while the ball is approaching the bat,
and must cease on contact. When ultrasonic transmission is used, a high-brightness
red LED must be driven by a cable long enough to permit mounting
beside the net frame, where it can be viewed by the judges and by a cable-
mounted photocell from the opponent – an exception to rule 8b. It must be lit
while transmitting. This rule is relaxed if the opponent is non-acoustic.
14) The robots will be allowed fifteen seconds to lock their vision systems onto the
ball before it is served. It is desirable but not compulsory that they indicate when
they are ready (by tone or voice output) so that this time can be shortened. Five
serves will be made in each direction. The scoring will be as in table tennis. The
competitors may opt to change ends between games, but this must be
accomplished within a time of five minutes. Initial setting-up time should also be
achieved within a time of five minutes. The number of games to determine a
result will be at least best-of-three, and will be determined beforehand in response
to the number of competitors.
15) A correct return will cause the ball to bounce just once on the table at the
opponent’s side of the net, before it passes through the opponent’s playing frame.
The ball may touch the playing frame, the net wire, or the net frame.
16) If the defender returns the ball 20 times in one rally, it wins the point.
17) The judges may disqualify a robot on the grounds of safety, or penalize it for
serious breaches of sportsmanship.
18) All dimensions quoted here may be subject to a tolerance of 2 percent up or down.
19) The robot may have two buttons or their equivalent with which the handler can
inform the robot that it has won or lost the point. A further button can tell the
robot that the ball has gone out of play, or is ready to be served. Excessive
controls which give the judges the impression that strategy is being determined by
the handler, rather than the robot, will be looked on with disfavour and may lead
to penalty points being awarded.
20) The robots should be easily transportable, and should be entertaining where
possible. They should not be excessively noisy.
Appendix B - Project Proposal (excerpt)
Aims/Assumptions
The major aim is to build a robot that can detect a ping-pong ball and move a bat into a
position to make contact with the ball. Real-time shot planning will not be attempted. It
is expected that the robot will be able to make one shot and then return to a ready
position. The rules and specifications for the table size, ball speed, etc., will be taken as
closely as possible from the original project. The dynamics of the ball are not a high
priority, so we will not have the ball bounce for this project.
Appendix C - Test Rig Drawings
(Test rig engineering drawings, seven pages, not reproduced in this transcript.)
Appendix D - Main Robot Class for Control Program
Robot.h

#pragma once
#include "Matrix.h"
#include "Serial.h"
#include "Trajectory.h"
#include <afxtempl.h>
#include "PingPongSerial.h"

class JointState
{
public:
    union
    {
        RFLOAT actual[3];
        struct { RFLOAT q, qd, qdd; };
    };
    RFLOAT desired[3];
    RFLOAT desiredTorques;
    RFLOAT endAngle;
    RFLOAT maxTorque;
    RFLOAT actualTorque;
    RFLOAT readypos;
    RFLOAT gearRatios;
    RFLOAT premovepos;
    RFLOAT posGain;
    RFLOAT velGain;
    MVector rPos;
    MVector rc;
    RFLOAT torqueConstants;
    RFLOAT currentLimits;
    RFLOAT rotorInertia;
    RFLOAT initialPos;
    RFLOAT readPos;
    int sliderValue;
    // RFLOAT T[12];   // rPos and R handle this
    RFLOAT R[9];
    // RFLOAT invR[9];
    RFLOAT I[9];
    RFLOAT coulomb;
    RFLOAT viscous;
    RFLOAT linkmass;
    int encoder;
    // int prevEncoderDiff;
};

class Robot
{
public:
    enum SHOTMODE
    {
        SETUP,
        READY,
        INSHOT,
        ENDSHOT,
        NUMSHOTMODE,
        BREAKMODE,
        RETURNSHOT,
        SHUTDOWN,
        SAFE,
    };

private:
    PingPongSerial Jarrad;
    Serial serial;                  // microcontroller
    SHOTMODE mode;
    Serial visionSerial;
    RFLOAT shotcount;
    int linknumber;
    MVector z0;
    RFLOAT check[6];
    RFLOAT check2[6];
    __int16 desiredCurrents[6];
    RFLOAT DH[6][4];
    MVector g;
    __int64 startCountOverall;
    __int64 startCount;
    RFLOAT CurrentShotTime;
    RFLOAT OverallShotTime;
    RFLOAT TotalShotTime;
    __int64 oldCount;
    __int64 timeOutOld;
    RFLOAT timerPeriod;
    CArray<Trajectory*> JointTrajectories;
    //CArray<JointState> state;
    RFLOAT* shotAngles;
    CArray<CArray<RFLOAT*>*> shotPositions;

public:
    CArray<JointState> state;
    int col, row;
    int number;
    BOOL first;
    float testinput;
    RFLOAT TotalShotTimeint;
    BOOL Emerg;
    BOOL READYTOPLAY;
    BOOL TURNOFF;
    int horpos;
    int vertpos;
    bool shotfound;

public:
    Robot(int linknumber, RFLOAT* mass, RFLOAT DHinit[6][4],
          RFLOAT rcinit[6][3], RFLOAT inertiainit[6][6], RFLOAT gravity[3],
          RFLOAT* coulomb, RFLOAT* viscous, RFLOAT* posGains,
          RFLOAT* velGains, RFLOAT* torqueConstants, RFLOAT* gearRatios,
          RFLOAT* currentLimits, RFLOAT* rotorInertia, RFLOAT* initialPos,
          RFLOAT* readPos);
    virtual ~Robot(void);

public:
    CString GetModeString();
    void SetMode(SHOTMODE modeset);
    SHOTMODE GetMode();             // to use SHOTMODE outside Robot::SHOTMODE
    void InitialSetup();
    void ChangeTrajectory();
    __int64 ResetTimeCount();
    float StartingMultiplier(int currentlink);
    void UpdateDesiredTrajectory();
    RFLOAT GetShotTime();
    RFLOAT GetOverallShotTime();
    void NewtonEuler();             // NewtonEuler takes approximately 96 microseconds to run (now 16.06 microseconds)
                                    // now 2.907 microseconds with optimisations on!!!
                                    // now 1.465 microseconds with optimisations on!!!
    Serial GetSerial();
    void SetInitialPos(RFLOAT* inpos);
    const JointState* GetState(int i);
    void UpdateState();
    int GetLinkNum();
    //BOOL UpdatePhysicalState();
    BOOL UpdateMotors();
    BOOL RequestEncoders();
    BOOL UpdateRealMotors();
    BOOL RequestRealEncoders();
    //RFLOAT GetDesiredTorque(int index);
    RFLOAT GetDesiredCurrent(int index);
    BOOL UpdatePretendMotors();
    BOOL RequestPretendEncoders();
    void ConvertCurrents(RFLOAT* retC);
    void GetCurrents(RFLOAT* retCurrents);
    RFLOAT GetTime();
    RFLOAT GetTimeOut();
    static RFLOAT sign(RFLOAT number)
    {
        if (number > 0) return 1.0;
        if (number == 0) return 0;
        return -1;
    }
    void GetJointAngles(RFLOAT* rval, int* shotcoord);
    void ReadShotSelect();
    void ReadShotSelect2();
    BOOL GetCamera(int* rval);
};
Robot.cpp #include "stdafx.h" #include ".\robot.h" #include "malloc.h" #include "Matrix.h" //#define _USE_MATH_DEFINES 1 #include <math.h> #include "Robot.h" #include <stdio.h> #define ENCODER_TICKS_PER_REVOLUTION 32000 #define RADIANS_PER_ENCODER_TICK (2 * M_PI / (RFLOAT)ENCODER_TICKS_PER_REVOLUTION) #define VELOCITY_DECAY 0.7 //#define ACCELERATION_DECAY 0.4 #define DEGREES (180.0 / M_PI) #define SERVO_VOLTS_PER_DEGREE (5.0 / 180.0) // 5 V, 180 degree range #define TORQUE_PER_AMP (10.0 / 1.0) // 10 Nm, 1 A #define FIXED_POINT_MAX (255)
Page 150 of 274
//#define INVERSE_MAX_SERVO_RADIANS (1.0 / (180.0 * DEGREES)) #define CAMERA_SIMULATION //#define ROBOT_SIMULATION #define MANUAL_CONTROL //#define _DEBUG //#define LAZY_SIMULATOR // define this to plant values into the robot's state Robot::Robot(int linknumber_, RFLOAT* mass,RFLOAT DHinit[6][4], RFLOAT rcinit[6][3],RFLOAT inertiainit[6][6], RFLOAT* gravity,RFLOAT* coulomb_,RFLOAT* viscous_,RFLOAT* posGains_, RFLOAT* velGains_,RFLOAT* torqueConstants_,RFLOAT* gearRatios_,RFLOAT* currentLimits_,RFLOAT* rotorInertia_,RFLOAT* initialPos_,RFLOAT* readPos_) #ifdef ROBOT_SIMULATION : serial(NULL), #else : serial("//./com1"), #endif Jarrad(FALSE), #ifdef CAMERA_SIMULATION visionSerial(NULL) //"//./com5") #else visionSerial("//./com5") #endif for(int i=0;i<3;i++) g.s[i] = gravity[i]; LARGE_INTEGER tester;// should not use malloc on a stuct QueryPerformanceFrequency(&tester); __int64 freq = tester.QuadPart; timerPeriod = 1.0/(RFLOAT)freq; linknumber = linknumber_; INT_PTR max = linknumber_; state.SetSize(linknumber); z0.z = 1; for (INT_PTR i = 0; i < max; i++)// for loop reads in and stores all the information required to fully define a robot
Page 151 of 274
for(int j=0;j<3;j++) state[i].actual[j] = 0;//set all initial states to zero state[i].desired[j] = 0; state[i].linkmass = mass[i]; state[i].coulomb = coulomb_[i]; state[i].viscous = viscous_[i]; state[i].gearRatios = gearRatios_[i]; state[i].actualTorque = 0; state[i].maxTorque = 1.0; // BRETT FIND ME: INPUT MAXIMUM TORQUES AND SET THIS VALUE APPROPRIATELY state[i].posGain =posGains_[i]; state[i].velGain =velGains_[i]; state[i].encoder = 0; state[i].torqueConstants = torqueConstants_[i]; state[i].currentLimits = currentLimits_[i]; state[i].rotorInertia = rotorInertia_[i]; state[i].initialPos = initialPos_[i]; state[i].readPos = readPos_[i]; // state[i].prevEncoderDiff = 0; for(int i = 0; i<linknumber; i++)// read in the DH parameters for(int j=0; j<4;j++) DH[i][j] = DHinit[i][j]; for(int k=0;k < linknumber;k++)// read each links vector to centre of gravity state[k].rc = MVector(rcinit[k][0], rcinit[k][1], rcinit[k][2]); //#error check resizing for (INT_PTR i = 0; i < max ; i++)// read int the initia and rotation matrices state[i].I[0] = inertiainit[i][0]; state[i].I[1] = inertiainit[i][3]; state[i].I[2] = inertiainit[i][5]; state[i].I[3] = inertiainit[i][3]; state[i].I[4] = inertiainit[i][1]; state[i].I[5] = inertiainit[i][4]; state[i].I[6] = inertiainit[i][5]; state[i].I[7] = inertiainit[i][4]; state[i].I[8] = inertiainit[i][2];
Page 152 of 274
state[i].R[6] = 0; state[i].R[7] = sin(DH[i][0]); state[i].R[8] = cos(DH[i][0]); state[i].rPos = MVector(DH[i][1], DH[i][3] * sin(DH[i][0]), DH[i][3] * cos(DH[i][0])); UpdateState();// update state calculates all the parameteres required for torque calculations Robot::~Robot(void) //free(endPositions); int Robot::GetLinkNum() return linknumber; void Robot::ReadShotSelect()// reads in the lookup table for the predefined shot endpositions FILE *stream = fopen( "inputtest.txt", "r+" ); float fp; fscanf(stream,"%d",&row); fscanf(stream,"%d", &col); shotAngles = (RFLOAT*)malloc(linknumber*row*col*sizeof(RFLOAT)); //index for each shot = x*linknumner*numVertCoord + y*linknumber for(int i=0;i<linknumber*col*row;i++) fscanf(stream,"%f",&fp); shotAngles[i] = fp; void Robot::SetMode(SHOTMODE modeset) mode = modeset; Serial Robot::GetSerial() return serial;
Page 153 of 274
Robot::SHOTMODE Robot::GetMode() return mode; void Robot::ReadShotSelect2() FILE *stream = fopen( "inputtest.txt", "r+" ); RFLOAT fp; int numHorCoord; int numVertCoord; if( stream == NULL ) printf( "The file fscanf.out was not opened\n" ); else fseek( stream, 0L, SEEK_SET ); fscanf(stream,"%d", &numHorCoord); fscanf(stream,"%d", &numVertCoord); while(!feof( stream )) for(int j = 0; j<numVertCoord;j++) RFLOAT* floattest= (RFLOAT*)malloc(linknumber* sizeof(RFLOAT)); for(int i = 0;i<linknumber;i++) fscanf(stream,"%f",&fp); floattest[i]=fp; //printf("%f",fp); CArray<RFLOAT*> me; me.Add(floattest); shotPositions.Add(&me); int ccc = 0; // return FALSE if bad, TRUE if succeeded BOOL Robot::UpdateRealMotors() RFLOAT* currents = (RFLOAT*)malloc(linknumber*sizeof(RFLOAT));
Page 154 of 274
GetCurrents(currents); //GetCurrents will update the desired currents into current variable DWORD_PTR max = 6; // * linknumber; for (int i=0 ; i < 6; i++) if(i < 6-linknumber) desiredCurrents[i] = 0; else desiredCurrents[i] = currents[i-(6-linknumber)]; RFLOAT test = state[linknumber-1].desired[0]; desiredCurrents[5] = (state[linknumber-1].desired[0]+M_PI/2)/M_PI*2317; #ifdef MANUAL_CONTROL// sets the output to the GUI slider values if in MANUAL_CONTROL mode for (int i=0 ; i < 5; i++) if(i < 6-linknumber) desiredCurrents[i] = 0; else desiredCurrents[i] = state[i-(6-linknumber)].sliderValue; float last = state[i-(6-linknumber)].sliderValue+ 255; last = last/512*2317; desiredCurrents[i] = last; //loop through the slider values and add them to desired currents #endif if(!Emerg)// sends 0 current to motors if emergency stop is pressed for(int i =0;i<6;i++) desiredCurrents[i] = 0;
Page 155 of 274
for(int i = 0;i <linknumber; i++) #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); ASSERT(file); fprintf(file, "micro(%d+1,%d-1) = %d;\n ",i,number, desiredCurrents[(6-linknumber)+i]); fclose(file); #endif __int16 start = 0x7F7E; __int16 end = 0x7D7C; serial.WriteWordLittleEndian(start);// send startbit for (DWORD_PTR i = 0; i < max; i++)// send 6 current values note: extra values are ignored if less then six links if (!serial.WriteWordLittleEndian(desiredCurrents[i])) return FALSE; serial.WriteWordLittleEndian(end); // other serial port / 3 motors done here return TRUE; void Robot::InitialSetup()// does all the seting up that can be done offline number = 1; RFLOAT velocity[6] = 0,0,0,0,0,0; RFLOAT acceleration[6] = 0,0,0,0,0,0; shotcount = 0; READYTOPLAY = false; shotfound = false; Emerg = true; for(int i=0;i<this->GetLinkNum();i++) state[i].readypos = state[i].readPos;
Page 156 of 274
state[i].encoder = (int)(state[i].initialPos / RADIANS_PER_ENCODER_TICK); state[i].actual[0] = state[i].encoder * RADIANS_PER_ENCODER_TICK; state[i].actual[0] = state[i].initialPos; state[i].actual[1] = velocity[i]; state[i].actual[2] = acceleration[i]; #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); ASSERT(file); fprintf(file, "desiredpos(%d+1,%d) = %f;\n ",i,number, state[i].actual[0]); fprintf(file, "actualpos(%d+1,%d) = %f;\n ",i,number, state[i].actual[0]); fclose(file); #endif // BRETT FIND ME: NOTE: desired and desiredTorques ARE NOT INITIALISED!!! //this->SetMode(READY);// set to hold ready position UpdateState(); this->NewtonEuler(); #ifdef _DEBUG for(int i=0; i<linknumber; i++) FILE* file = fopen("dump.m", "a"); ASSERT(file); fprintf(file, "torque(%d+1,%d) = %f;\n ",i,number, state[i].desiredTorques); fclose(file); #endif first = true; number = 2; //send the initialization to the micro __int16 start = 0x7F7E; __int16 end = 0x7D7C; serial.WriteWordLittleEndian(start); for (int i = 0; i <6; i++) if (!serial.WriteWordLittleEndian(0));
Page 157 of 274
serial.WriteWordLittleEndian(end); void Robot::UpdateDesiredTrajectory() switch(mode)// one big switch statement that updates trajectory imformation depending on what state case SETUP:// makes trajectory to rise arm to readpios for(int i=0;i<linknumber;i++) RFLOAT shotValues[3]; JointTrajectories.GetAt(i)->EvaluateTrajectory(shotValues,CurrentShotTime); state[i].desired[0] = shotValues[0]; state[i].desired[1] = shotValues[1]; state[i].desired[2] = shotValues[2]; check[i] = state[i].desired[0]; break; case READY:// holds the ready position //set the desired positions to readpos and acc and vel to zero for(int i=0;i<linknumber;i++) state[i].desired[0] = state[i].readypos; state[i].desired[1] = 0;//quick fix for now assuming shots end at stationary pos state[i].desired[2] = 0; break; case INSHOT:// sends current values of trajecotry during active shot for(int i=0;i<linknumber;i++) RFLOAT shotValues[3]; JointTrajectories.GetAt(i)->EvaluateTrajectory(shotValues,CurrentShotTime); state[i].desired[0] = shotValues[0]; state[i].desired[1] = shotValues[1]; state[i].desired[2] = shotValues[2]; check[i] = state[i].desired[0];
Page 158 of 274
break; case ENDSHOT: // holds the shot position at contact: will not be required for shots with non-zero conditions // can be ignored by setting its shot time zero in change trajectory for(int i=0;i<linknumber;i++) state[i].desired[0] = state[i].endAngle; state[i].desired[1] = 0;//quick fix for now assuming shots end at stationary pos state[i].desired[2] = 0; break; case RETURNSHOT:// returns the arm to the ready position once the shot is complete for(int i=0;i<linknumber;i++) RFLOAT shotValues[3]; JointTrajectories.GetAt(i)->EvaluateTrajectory(shotValues,CurrentShotTime); state[i].desired[0] = shotValues[0]; state[i].desired[1] = shotValues[1]; state[i].desired[2] = shotValues[2]; //check[i] = state[i].desired[0]; break; case SHUTDOWN:// returns the arm to its 'limp' position for(int i=0;i<linknumber;i++) RFLOAT shotValues[3]; JointTrajectories.GetAt(i)->EvaluateTrajectory(shotValues,CurrentShotTime); state[i].desired[0] = shotValues[0]; state[i].desired[1] = shotValues[1]; state[i].desired[2] = shotValues[2]; //check[i] = state[i].desired[0]; break; case SAFE: for(int i=0;i<linknumber;i++) state[i].desired[0] = state[i].initialPos; state[i].desired[1] = 0;//quick fix for now assuming shots end at stationary pos state[i].desired[2] = 0;
Page 159 of 274
break; default: ASSERT(false); #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); fprintf(file, "time = [time(1,:) %f];\n",CurrentShotTime); //printf("Overall shot time: %f\n",OverallShotTime); fprintf(file, "totaltime = [totaltime(1,:) %f];\n",OverallShotTime); fclose(file); #endif for(int i=0;i<linknumber;i++) #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); ASSERT(file); fprintf(file, "desiredpos(%d+1,%d) = %f;\n ",i,number, state[i].desired[0]); fprintf(file, "actualpos(%d+1,%d) = %f;\n ",i,number,state[i].actual[0]); fprintf(file, "actualvel(%d+1,%d) = %f;\n ",i,number,state[i].actual[1]); fclose(file); #endif //get the current shot time void Robot::ConvertCurrents(RFLOAT* retC) //converts the desired torques into the desired currents taking into account the motors torque limits for(int i = 0;i<linknumber;i++) retC[i] = state[i].desiredTorques/state[i].gearRatios/state[i].torqueConstants*1000;
Page 160 of 274
if(retC[i]>state[i].currentLimits) retC[i] = state[i].currentLimits; if(retC[i] < -state[i].currentLimits) retC[i] = -state[i].currentLimits; #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); ASSERT(file); fprintf(file, "currents(%d+1,%d-1) = %f;\n ",i,number, retC[i]); fclose(file); #endif //StartingMultiplier(i)* retC[i] = StartingMultiplier(i)*255*retC[i]/state[i].currentLimits; if(retC[i]>255) retC[i] = 255; if(retC[i]< -255) retC[i] = -255; //ASSERT(retC[i] >= -255); //ASSERT(retC[i] <= 255); float Robot::StartingMultiplier(int currentlink) // lame attempt to overcome the current controller problem float gainmax = 3.0; float stoppoint = 0.5; float currentvelocity; if(state[currentlink].actual[1]<0) currentvelocity = -state[currentlink].actual[1]; else currentvelocity = state[currentlink].actual[1];
Page 161 of 274
float gainvalue; if(currentvelocity > stoppoint) gainvalue = 1; else gainvalue = (1-gainmax)/stoppoint*currentvelocity + gainmax; #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); ASSERT(file); fprintf(file, "gainvalue(%d+1,%d-1) = %f;\n ",currentlink,number, gainvalue); fclose(file); #endif return gainvalue; void Robot::GetCurrents(RFLOAT* retCurrents) // one of the main fucntion loops that returns an array of desired currents BOOL rval = RequestEncoders();// reads in encoders this->UpdateDesiredTrajectory(); ASSERT(rval); for(int i = 0;i<linknumber;i++) check[i] = state[i].desired[0]; RFLOAT velocityError; RFLOAT positionError; for(int i=0;i<this->GetLinkNum();i++)// implements the outer loop control law velocityError = state[i].desired[1] - state[i].actual[1]; positionError = state[i].desired[0] - state[i].actual[0]; state[i].actual[2] = state[i].desired[2] + velocityError*state[i].velGain + positionError*state[i].posGain;
Page 162 of 274
this->UpdateState(); this->NewtonEuler();// updates the required torque imformation for(int i =0; i<linknumber;i++) #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); ASSERT(file); fprintf(file, "acceldesired(%d+1,%d) = %f;\n ",i,number, state[i].desired[2]); fprintf(file, "accelactual(%d+1,%d) = %f;\n ",i,number, state[i].actual[2]); fprintf(file, "torque(%d+1,%d) = %f;\n ",i,number, state[i].desiredTorques); fclose(file); #endif number ++;// number is used for matlab dump RFLOAT* currents = (RFLOAT*)_alloca(sizeof(RFLOAT) * linknumber); this->ConvertCurrents(retCurrents); RFLOAT Robot::GetDesiredCurrent(int index) if (index == 5) return (state[index].actual[0] / state[index].maxTorque); else return (state[index].desiredTorques / state[index].maxTorque); BOOL Robot::RequestEncoders()// calls real or prentend ecoders depending on mode #ifdef ROBOT_SIMULATION return RequestPretendEncoders(); #else
Page 163 of 274
return RequestRealEncoders(); #endif BOOL Robot::UpdateMotors()// updates real or pretend motors depending on mode #ifdef ROBOT_SIMULATION return UpdatePretendMotors(); #else return UpdateRealMotors(); #endif BOOL Robot::RequestRealEncoders() __int16 encoderDiff[6]; encoderDiff[0] = 0x7f7f; WORD word; DWORD_PTR max = 6; int inputCounter = 0; bool buffer_flag = true; bool timeout_flag = true; //start and end keywords for serial protocal WORD keyStart = 0x7F7E; WORD keyEnd = 0x7D7C; timeOutOld = this->ResetTimeCount(); while(buffer_flag && timeout_flag) while(!serial.ReadWordLittleEndian(word)) RFLOAT value = GetTimeOut(); if(value>0.5)//timeout timeout_flag = false; ASSERT(false); if(word == keyStart) buffer_flag = false; if(inputCounter>3) ASSERT(false);
Page 164 of 274
WORD encoderBuffer[12]; int buffer_counter = 0; buffer_flag = true; if(timeout_flag) while(buffer_flag)// loop sends all imformation to the microcontroller timeOutOld = this->ResetTimeCount(); RFLOAT value = GetTimeOut(); while(!serial.ReadWordLittleEndian(word)); value = GetTimeOut(); //RFLOAT value = GetTimeOut(); if(value>0.5) ASSERT(false); if(word == keyEnd) buffer_flag = false; else encoderBuffer[buffer_counter] = word; buffer_counter++; if(buffer_counter == 6)// checks if six bytes were read in between start and end bits // other serial port / 3 motors done here for(int i = 0; i<6;i++) encoderDiff[i] = encoderBuffer[i]; RFLOAT timesincelast = GetTime(); #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); fprintf(file, "timeInc = [timeInc(1,:) %f];\n",timesincelast); fclose(file); #endif RFLOAT frequency = 1.0 / timesincelast; oldCount = this->ResetTimeCount();
Page 165 of 274
// assume encoder ticks are not the differences between successive requests for (int i = 0; i < 6; i++)// updates encoder info taking into account different numbers of links if(i>=(6-linknumber)) state[i-(6-linknumber)].encoder += encoderDiff[i]; state[i-(6-linknumber)].actual[0] = state[i-(6-linknumber)].encoder * RADIANS_PER_ENCODER_TICK; for (int i = 0; i < 6; i++) if(i>=(6-linknumber))// differentiate to find position // Generate velocity data state[i-(6-linknumber)].actual[1] = (1.0 - VELOCITY_DECAY) * state[i-(6-linknumber)].actual[1]+(VELOCITY_DECAY)*encoderDiff[i]*RADIANS_PER_ENCODER_TICK * frequency; /*for (int i=0; i<linknumber;i++) state[i].actual[0] = state[i].desired[0]; state[i].actual[1] = state[i].desired[1]; */ return TRUE; BOOL Robot::RequestPretendEncoders()// calculates the direct dynamics for simulation purposes #ifdef LAZY_SIMULATOR RFLOAT position_fix[6] = 1,2,3,4,5,6; RFLOAT velocity_fix[6] = 7, 8, 9, 10, 11, 12,; RFLOAT acceleration_fix[6] = 13,14,15,16,17,18; for(int i = 0; i<6; i++) state[i].actual[0] = position_fix[i]; state[i].actual[1] = velocity_fix[i]; state[i].actual[2] = acceleration_fix[i]; #else
Page 166 of 274
for(int i=0; i<linknumber;i++) state[i].actualTorque = state[i].desiredTorques; Matrix B(linknumber, linknumber + 1); RFLOAT* tdash = (RFLOAT*)_alloca(sizeof(RFLOAT) * linknumber); RFLOAT* oldAcc = (RFLOAT*)_alloca(sizeof(RFLOAT) * linknumber); // save old accelerations RFLOAT* oldQd = (RFLOAT*)_alloca(sizeof(RFLOAT) * linknumber); MVector oldG; //RFLOAT* answer = (RFLOAT*)_alloca(sizeof(RFLOAT) * linknumber); for (int i = 0; i < linknumber; i++) oldAcc[i] = state[i].actual[2]; oldQd[i] = state[i].actual[1]; state[i].actual[2] = 0; oldG = g; // set acc to zero, compute torque, gives tdash // qdd = 0 this->UpdateState(); NewtonEuler(); for (int i = 0; i < linknumber; i++) tdash[i] = state[i].desiredTorques; // g = 0, qd = 0, qdd = 1 if i=j, 0 otherwise for (int i = 0; i < linknumber; i++) state[i].actual[1] = 0;//sets all the velocities to zero g = MVector(0, 0, 0); for (int col = 0; col < linknumber; col++) state[col].actual[2] = 1; this->UpdateState(); NewtonEuler();
Page 167 of 274
for (int i = 0; i < linknumber; i++) B.SetData(i, col, state[i].desiredTorques); state[col].actual[2] = 0; RFLOAT* x = (RFLOAT*)_alloca(sizeof(RFLOAT) * linknumber); for (int i = 0; i < linknumber; i++) B.SetData(i, linknumber, /*state[i].actual[0]*/(state[i].actualTorque - tdash[i])); //B.printMatrix(); B.Solve(x); //Invert(); for(int i =0; i<linknumber;i++) check[i] = x[i]; // restore old states: g = oldG; // tdash is the new accelerations //#error REQUIRE RUNGE-KUTTA INTEGRATION AND MATRIX INVERSION // Integrate using Heun's method (p 946 Kreyszig), // see p 953 (Adams-Bashforth) for a better version for (int i = 0; i < linknumber; i++) RFLOAT newVel = oldQd[i] + 0.5 * CurrentShotTime * (x[i] + oldAcc[i]); state[i].actual[0] += 0.5 * CurrentShotTime * (oldQd[i] + newVel); state[i].actual[1] = newVel; // state[i].actual[2] = x[i];
Page 168 of 274
for(int i =0; i<linknumber;i++)// this bit is a quick fix should really intergate the accelvalue twice state[i].actual[0] = state[i].desired[0]; state[i].actual[1] = state[i].desired[1]; check[i] = state[i].desired[0]; #endif return TRUE; BOOL Robot::UpdatePretendMotors()// for testing purposes only RFLOAT* currents = (RFLOAT*)malloc(linknumber*sizeof(RFLOAT)); GetCurrents(currents); //GetCurrents will update the desired currents into current variable for (int i=0 ; i < 6; i++)//loop used to store correct currents for various link numbers if(i < 6-linknumber) desiredCurrents[i] = 0; else desiredCurrents[i] = currents[i-(6-linknumber)]; RFLOAT test = state[linknumber-1].desired[0]; desiredCurrents[5] = (state[linknumber-1].desired[0]+M_PI/2)/M_PI*2317; //Converts the desired torques to number that we can actually send //for(int i = for(int i = 0;i <linknumber; i++) #ifdef _DEBUG FILE* file = fopen("dump.m", "a"); ASSERT(file); fprintf(file, "micro(%d+1,%d-1) = %d;\n ",i,number, desiredCurrents[(6-linknumber)+i]); fclose(file); #endif
Page 169 of 274
return TRUE; RFLOAT Robot::GetShotTime()// timing LARGE_INTEGER count; QueryPerformanceCounter(&count); __int64 diff = count.QuadPart-startCount; //return CurrentShotTime += 0.01; return (RFLOAT)diff*timerPeriod; RFLOAT Robot::GetOverallShotTime()//timing LARGE_INTEGER count; QueryPerformanceCounter(&count); __int64 diff = count.QuadPart-startCountOverall; //printf("%f\n",OverallShotTime); //return OverallShotTime += 0.01; return (RFLOAT)diff*timerPeriod; __int64 Robot::ResetTimeCount()// reseting the timer counter LARGE_INTEGER newCount; QueryPerformanceCounter(&newCount); __int64 retcount = newCount.QuadPart; return retcount; RFLOAT Robot::GetTime()//timing LARGE_INTEGER count; QueryPerformanceCounter(&count); __int64 diff = count.QuadPart - oldCount; oldCount = count.QuadPart; return (RFLOAT)diff * timerPeriod; RFLOAT Robot::GetTimeOut()//timing LARGE_INTEGER count; QueryPerformanceCounter(&count); __int64 diff = count.QuadPart - timeOutOld; timeOutOld = count.QuadPart; return (RFLOAT)diff * timerPeriod;
Page 170 of 274
void Robot::UpdateState()// calculates the T matrix for (int i = 0; i < linknumber; i++) RFLOAT dh2 = DH[i][2] = state[i].actual[0]; RFLOAT dh0 = DH[i][0]; /* state[i].T.TransformDH(DH[i]); state[i].T.TransformRot(&state[i].R); state[i].R.InverseTrans(&state[i].invR); */ // state[i].T state[i].R[0] = cos(dh2); state[i].R[1] = -sin(dh2)*cos(dh0); state[i].R[2] = sin(dh2)*sin(dh0); state[i].R[3] = sin(dh2); state[i].R[4] = cos(dh2)*cos(dh0); state[i].R[5] = -cos(dh2)*sin(dh0); const JointState* Robot::GetState(int i)// const means they are not able to be changed outsite return &state[i];// & means that the address of the stuff is returned not the stuff itself void Robot::SetInitialPos(RFLOAT* inpos) for(int i = 0; i<linknumber;i++) state[i].premovepos = inpos[i]; BOOL Robot::GetCamera(int* rval) //float position[3]; //BYTE* address = (BYTE*)position; //size_t max = sizeof(float) * 3; #ifdef CAMERA_SIMULATION
    if (shotfound && READYTOPLAY) {
        rval[0] = horpos;  // col
        rval[1] = vertpos; // row
        rval[2] = 2;
        shotcount++;
        shotfound = false;
        return TRUE;
    }
    else
        return FALSE;
#else
    PingPongBallState state;
    if (Jarrad.GetPos(state)) { // gets the position info from the camera
        // note: needs to be looked into to make more elegant
        rval[0] = state.position[0];
        rval[1] = state.position[1];
        rval[2] = state.position[2];
        return TRUE;
    }
    return FALSE;
#endif
}

void Robot::GetJointAngles(RFLOAT* rval, int* shotcoord)
{
    if (linknumber == 1)
        rval[0] = 1.2;
    else {
        for (int i = 0; i < linknumber; i++) {
            // index for each shot = x*linknumber*numVertCoord + y*linknumber
            int index = i + shotcoord[0] * col + shotcoord[1] * linknumber * col;
            rval[i] = shotAngles[index];
        }
    }
}
CString Robot::GetModeString() // for GUI
{
    CString retstring;
    switch (mode) {
    case SAFE:       retstring = "SAFE TO TURN OFF"; break;
    case READY:      retstring = "READY"; break;
    case INSHOT:     retstring = "INSHOT"; break;
    case ENDSHOT:    retstring = "ENDSHOT"; break;
    case RETURNSHOT: retstring = "RETURNSHOT"; break;
    case SHUTDOWN:   retstring = "SHUTDOWN"; break;
    }
    return retstring;
}

void Robot::ChangeTrajectory()
{
    // gets the position from the camera
    // determines the joint angles required to achieve that position from the lookup table
    // sets the start position to the current position
    // sets the end position to the desired position
    OverallShotTime = this->GetOverallShotTime();
    if (this->GetMode() == SETUP && first) { // generates the setup trajectory
        RFLOAT startinput[3];
        RFLOAT endinput[3] = { 0, 0, 0 };
        for (int i = 0; i < linknumber; i++) {
            state[i].desired[0] = state[i].initialPos;
            for (int j = 0; j < 3; j++)
                // startinput[j] = state[i].actual[j];
                startinput[j] = state[i].desired[j];
            printf("test = %f\n", state[i].actual[0]);
            endinput[0] = state[i].readypos;
            JointTrajectories.Add(new Trajectory(startinput, endinput, TotalShotTimeint));
            JointTrajectories.GetAt(i)->Quintic();
        }
        startCount = ResetTimeCount();
        startCountOverall = ResetTimeCount();
        CurrentShotTime = this->GetShotTime();
        OverallShotTime = this->GetOverallShotTime();
        CurrentShotTime = 0;
        OverallShotTime = 0;
        first = false;
    }
    else if (this->GetMode() == RETURNSHOT && first) { // generates the return trajectory
        RFLOAT startinput[3];
        RFLOAT endinput[3] = { 0, 0, 0 };
        RFLOAT shotTimeReturn = 1;
        TotalShotTime = shotTimeReturn;
        for (int i = 0; i < linknumber; i++) {
            for (int j = 0; j < 3; j++)
                // startinput[j] = state[i].actual[j];
                startinput[j] = state[i].desired[j];
            // printf("test = %f\n", state[i].actual[0]);
            endinput[0] = state[i].readypos;
            JointTrajectories.InsertAt(i, new Trajectory(startinput, endinput, TotalShotTime));
            JointTrajectories.GetAt(i)->Quintic();
        }
        LARGE_INTEGER tester2;
        QueryPerformanceCounter(&tester2);
        startCount = tester2.QuadPart; // start the timer for the shot
        CurrentShotTime = this->GetShotTime();
        first = false;
    }
    else if (this->GetMode() == READY && TURNOFF) { // generates the shutdown trajectory
        RFLOAT startinput[3];
        RFLOAT endinput[3] = { 0, 0, 0 };
        TotalShotTime = TotalShotTimeint;
        for (int i = 0; i < linknumber; i++) {
            for (int j = 0; j < 3; j++)
                // startinput[j] = state[i].actual[j];
                startinput[j] = state[i].desired[j];
            endinput[0] = state[i].initialPos;
            JointTrajectories.InsertAt(i, new Trajectory(startinput, endinput, TotalShotTime));
            JointTrajectories.GetAt(i)->Quintic();
        }
        LARGE_INTEGER tester2;
        QueryPerformanceCounter(&tester2);
        startCount = tester2.QuadPart; // start the timer for the shot
        CurrentShotTime = this->GetShotTime();
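Each branch above hands a start state, an end state and a duration to Trajectory and then calls Quintic(). The Trajectory class itself is not listed in this appendix; for rest-to-rest motion (zero velocity and acceleration at both ends, as endinput[] is zeroed above) the quintic reduces to a single blend polynomial. The closed form below is our sketch of what Quintic() presumably evaluates, not the project's actual implementation:

```cpp
#include <cassert>
#include <cmath>

// Rest-to-rest quintic position profile: q(0) = q0, q(T) = qf, with zero
// velocity and acceleration at both endpoints. s = t/T normalises time.
double QuinticPosition(double q0, double qf, double t, double T) {
    double s = t / T;
    double blend = 10*s*s*s - 15*s*s*s*s + 6*s*s*s*s*s; // 10s^3 - 15s^4 + 6s^5
    return q0 + (qf - q0) * blend;
}
```

The blend term is 0 at s = 0, 1 at s = 1, and exactly 0.5 at the midpoint, giving a smooth symmetric joint motion over the shot time.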
        // CurrentShotTime = 0;
        // OverallShotTime = 0;
        this->SetMode(SHUTDOWN);
    }
    else {
        int* coord = (int*)malloc(3 * sizeof(int));
        // the trajectory can be updated while in a shot
        if (this->GetCamera(coord) && READYTOPLAY) { // checks the camera for an update
            RFLOAT* endpositions = (RFLOAT*)malloc(linknumber * sizeof(RFLOAT));
            this->GetJointAngles(endpositions, coord);
            for (int i = 0; i < linknumber; i++)
                state[i].endAngle = endpositions[i];
            CurrentShotTime = 0;
            TotalShotTime = 0.5;
            RFLOAT startinput[3];
            RFLOAT endinput[3] = { 0, 0, 0 }; // the end result should have velocity and acceleration = 0
            for (int i = 0; i < linknumber; i++) {
                for (int j = 0; j < 3; j++)
                    startinput[j] = state[i].actual[j];
                // problem: the old trajectories have not been deleted
                endinput[0] = endpositions[i];
                printf("Position %d: = %f\n", i, startinput[0]);
                JointTrajectories.InsertAt(i, new Trajectory(startinput, endinput, TotalShotTime));
                JointTrajectories.GetAt(i)->Quintic();
            }
            LARGE_INTEGER tester2;
            QueryPerformanceCounter(&tester2);
            startCount = tester2.QuadPart; // start the timer for the shot
            CurrentShotTime = this->GetShotTime();
            CurrentShotTime = 0;
            this->SetMode(INSHOT); // shot in progress
        }
        else {
            switch (mode) { // determines which mode we are in via various transition conditions
            case RETURNSHOT:
                CurrentShotTime = this->GetShotTime();
                if (CurrentShotTime >= TotalShotTime)
                    this->SetMode(READY);
                break;
            case SETUP:
                CurrentShotTime = this->GetShotTime();
                if (CurrentShotTime >= TotalShotTimeint)
                    this->SetMode(READY);
                break;
            case INSHOT:
                CurrentShotTime = this->GetShotTime();
                if (CurrentShotTime >= TotalShotTime) {
                    LARGE_INTEGER tester2;
                    QueryPerformanceCounter(&tester2);
                    startCount = tester2.QuadPart; // start the timer for the shot
                    CurrentShotTime = this->GetShotTime();
                    this->SetMode(ENDSHOT);
                    first = true;
                }
                break;
            case READY:
                CurrentShotTime = this->GetShotTime();
                break;
            case ENDSHOT:
                CurrentShotTime = this->GetShotTime();
                if (CurrentShotTime >= 0.5) {
                    this->SetMode(RETURNSHOT);
                    first = true;
                }
                break;
            case SHUTDOWN:
                CurrentShotTime = this->GetShotTime();
                if (CurrentShotTime >= TotalShotTimeint)
                    this->SetMode(SAFE);
                break;
            }
        }
    }
}

void Robot::NewtonEuler() // dynamics calculation; see Brett before changing, as it is confusing
{
    MVector oldw, dv, olddw, dw, w, ddpc, dwm, force, moment, torque;
    MVector ddp(g);
    MVector* formoment = (MVector*)_alloca(sizeof(MVector) * linknumber);
    MVector* forforce = (MVector*)_alloca(sizeof(MVector) * linknumber);
    // forward recursion over the links:
    for (int i = 0; i < linknumber; i++) {
        // equation 1:
        MVector temp(z0);
        temp *= state[i].qd;
        temp += oldw;
        temp.FastPostRotate(state[i].R, w);
        // equation 2:
        temp = oldw;
        temp *= state[i].qd;
        MVector cross1(temp, z0);
        temp = z0;
        temp *= state[i].qdd;
        temp += cross1;
        temp += olddw;
        temp.FastPostRotate(state[i].R, dw);
        // equation 3:
        MVector::CrossProduct(w, state[i].rPos, cross1);
        MVector cross2(w, cross1);
        MVector::CrossProduct(dw, state[i].rPos, cross1);
        MVector temp2(ddp);
        temp2.FastPostRotate(state[i].R, ddp);
        ddp += cross1;
        ddp += cross2;
        // equation 4:
        MVector::CrossProduct(w, state[i].rc, cross1);
        MVector::CrossProduct(w, cross1, cross2);
        MVector::CrossProduct(dw, state[i].rc, cross1);
        ddpc = ddp;
        ddpc += cross1;
        ddpc += cross2;
        // equation 5:
        dw.FastRotate(state[i].I, temp);
        w.FastRotate(state[i].I, cross2);
        MVector::CrossProduct(w, cross2, cross1);
        temp += cross1;
        formoment[i] = temp;
        oldw = w;
        olddw = dw;
        // equation 6:
        temp = ddpc;
        temp *= state[i].linkmass;
        forforce[i] = temp;
    }
    // backward recursion, starting at the last link:
    int i = linknumber - 1;
    MVector cross2(force, state[i].rc);
    force += forforce[i];
    MVector add1 = state[i].rc;
    add1 += state[i].rPos;
    MVector cross1(force, add1);
    MVector product1(-cross1.x, -cross1.y, -cross1.z);
    moment += product1;
    // moment *= 2;
    // MVector clone = moment;
    // note: ask Brett whether the moment can be multiplied by 2 here
    // moment += clone;
    moment += cross2;
    moment += formoment[i];
    moment.FastRotate(state[i].R, product1);
    RFLOAT torque = product1.DotProduct(z0);
    // IMPORTANT: sign(), not sin(), for the Coulomb friction term:
    state[i].desiredTorques = torque + state[i].viscous * state[i].actual[1]
                            + state[i].coulomb * sign(state[i].actual[1]);
    for (int i = linknumber - 2; i >= 0; i--) {
        MVector product2, product3;
        force.FastRotate(state[i + 1].R, product3);
        force = product3;
        force += forforce[i];
        MVector add1 = state[i].rc;
        add1 += state[i].rPos;
        MVector cross1(force, add1);
        MVector product1(-cross1.x, -cross1.y, -cross1.z);   // term 1
        moment.FastRotate(state[i + 1].R, product2);          // term 2
        MVector::CrossProduct(product3, state[i].rc, cross1); // term 3
        moment = product1;
        moment += product2;
        moment += cross1;
        moment += formoment[i];
        // torque equation:
        moment.FastRotate(state[i].R, product1);
        RFLOAT torque = product1.DotProduct(z0);
        state[i].desiredTorques = torque + state[i].viscous * state[i].actual[1]
                                + state[i].coulomb * sign(state[i].actual[1]);
    }
}
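The torque command in both recursions adds viscous and Coulomb friction compensation to the Newton-Euler torque, with sign() (not sin()) selecting the Coulomb direction. Isolated as a sketch (function names are ours):

```cpp
#include <cassert>
#include <cmath>

// Sign of the joint velocity: +1, -1, or 0 at rest.
double sign(double v) { return (double)((v > 0) - (v < 0)); }

// Newton-Euler torque plus friction compensation, mirroring
// desiredTorques = torque + viscous * qd + coulomb * sign(qd) above.
double CompensatedTorque(double dynTorque, double qd, double viscous, double coulomb) {
    return dynTorque + viscous * qd + coulomb * sign(qd);
}
```

At zero velocity the Coulomb term drops out, so the command reduces to the pure dynamics torque.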
Appendix E – Vision System Code
// JSurfaceCalibration.h
#pragma once
#include "JSurface.h"
#include "JSurfaceVideo.h"
#include "../ProgramCore/Array.h"
#include "../PhysicsCore/Transform.h"

/* NOTE: This implementation uses JSurfaceVideo as the base class of video; when OnUpdate()
   is called, it tells JSurfaceCalibration to do something about it.
   Note that subpixel cutoffs can be done later for more accuracy by looking for the closest
   boundary and interpolating between the on and off pixels. Cutoffs can be 'smudged' so
   that their values are consistent with their neighbours. */

class CalibrationSet;

class JCalibrationProperty : public JProperty
{
private:
    int dimension;
    static LPCTSTR title[3];
public:
    JCalibrationProperty(int setDimension);
    void OnValue();
private:
    static LPCTSTR GetTitle(int dimension) { return title[dimension]; }
};

class JCalibrationButton : public JButton
{
public:
    JCalibrationButton(const CString& caption) : JButton(caption) {}
public:
    void OnMouseClick(); // tell the calibration set that we are done!
};

class JCalibrationQuery : public JContainer
{
private:
    CalibrationSet* parent;
public:
    JCalibrationQuery(CalibrationSet* setParent);
public:
    CalibrationSet* GetCalibrationSet() { return parent; }
    void OnPos();
};

class CalibrationSet
{
public:
    Point2D* data;
    // distance from the screen, with an unknown but fixed reference point across all samples:
    float dx, dy, dz;
    BOOL waiting;
    BOOL done;
    BOOL keepSampling;
public:
    CalibrationSet(int x, int y);
    ~CalibrationSet();
    BOOL Query(JDevice* device);
};

/* The Calibration class should be associated with JSurfaceVideo and automatically loaded
   and saved as appropriate according to the camera's unique name. Store calibration data
   in camera.clb.
   ** IMPORTANT NOTE, FIND ME LATER: BE CAREFUL WITH THE PATH OF camera.clb **
   Procedure for storing calibration:
   - if the camera exists already, overwrite it
   - if the camera does not yet exist, add it
   Procedure for recalling calibration (part of JSurfaceVideo initialisation):
   - if the camera exists in the calibration file, load the calibration
   - have a BOOL HasCalibration() function that can be called on JSurfaceVideo */

class Calibration
{
    ALIGN(Calibration)
private:
    ThreeD* data; // array of normalised vectors associated with each camera pixel
    float* error; // an error estimate of each camera pixel
    CString name;
    int x;
    int y;
    Transform transform;
    INT_PTR referenceCamera; // -1 for reference cameras, otherwise the ID of the other
                             // camera; if referenceCamera is -1, transform is invalid
    int margin;
public:
    Calibration(int setX, int setY, int setMargin = 5);
    Calibration() { x = y = margin = 0; data = 0; error = 0; }
    ~Calibration();
public:
    void Serialize(CArchive& archive); // save to camera.clb (.clb = .calibration)
    void SetName(const CString& setName) { name = setName; }
    void GetName(CString& rval) const { rval = name; }
    int GetX() const { return x; }
    int GetY() const { return y; }
    void SetError(int xx, int yy, float value) { error[xx + x * yy] = value; }
    float GetError(int xx, int yy) const { return error[xx + x * yy]; }
    void SetData(int xx, int yy, const ThreeD& value) { data[xx + x * yy] = value; }
    void GetData(int xx, int yy, ThreeD& rval) const { rval = data[xx + x * yy]; }
    int GetMargin() { return margin; }
    Calibration* Clone() const;
};
class CalibrationFile
{
private:
    BOOL dirty;
    Array<Calibration*> calibration;
public:
    CalibrationFile();
    ~CalibrationFile();
public:
    // serialization occurs automatically during the constructor (load) and destructor (save)
    void Serialize(CArchive& archive); // save to camera.clb (.clb = .calibration)
    void UpdateCamera(Calibration& camera); // update a camera
    const Calibration* GetCalibration(CString& cameraName); // return NULL if not found
    const Calibration* GetCalibration(INT_PTR index) { return calibration[index]; }
    INT_PTR GetSize() { return calibration.GetSize(); }
    // the calibration returned by GetCalibration() should be cloned if it is to be
    // stored in JSurfaceVideo
    void Save(); // save to disk (if dirty) (automatically called by the destructor)
};

class JSurfaceCalibration : public JSurface
{
public:
    enum CalibrationStage {
        CS_SQUARE, // CS_ = calibration stage, to avoid confusion with SQUARE(x)
        CS_BLACK,
        CS_WHITE,
        PRE_PASS_TOP_ACROSS,
        PASS_TOP_ACROSS,
        PASS_TOP_BACK,
        PASS_LEFT_ACROSS,
        PASS_LEFT_BACK,
        PASS_UP_ACROSS,
        PASS_UP_BACK,
        PASS_RIGHT_ACROSS,
        PASS_RIGHT_BACK,
        CS_BOUNDARY, // stay at this one when we are done
        CS_DONE,     // we're finished
        NUM_CALIBRATION_STAGES
    };
    static const int DEFAULT_MIN_THRESHOLD_VALUE;
private:
    class JSurfaceCalibrationVideo : public JSurfaceVideo
    {
    private:
        JSurfaceCalibration* calibration; // tell calibration when new data is available
    public:
        JSurfaceCalibrationVideo(JDevice* setDevice, JSurfaceCalibration* setCalibration,
                                 DWORD setOptions = NULL);
        void OnUpdate() { calibration->OnUpdate(); }
    };
private:
    JSurfaceCalibrationVideo* video;
    JSurface* output;
    CalibrationStage stage;
    int frame;
    int numHoldFrames;
    int squareSize;
    int greenLight; // user must give the go-ahead for some actions (-1 implies end the run)
    int progress;
    int minThresholdRatio;

    /////////////////////////////////////////////////////////
    // CALIBRATION DATA - SCREEN-CAMERA PIXEL CORRELATION: //
    /////////////////////////////////////////////////////////
    // Vertical or horizontal cutoffs (of camera (not screen) columns / rows);
    // compute the correlation prior to the next stage:
    float* vcutoff;
    float* hcutoff;
    int* threshold; // the threshold for each camera pixel (if on the screen)
    // -1 implies not on screen; otherwise r+g+b must be at least threshold to count as
    // on for that pixel. threshold is computed in CS_BLACK, CS_WHITE, and used in the
    // pass stages. After CS_BLACK, threshold is the r+g+b for that camera pixel;
    // after CS_WHITE, threshold is the average of the two r+g+b's if their difference
    // was sufficient (defined by a contrast variable), otherwise it is -1.
    // We could perform the threshold tests multiple times, or blur, to reduce error.
    CalibrationSet* current;
    Array<CalibrationSet*> cameraToScreen; // the final correlation in a sample is in this array
    int numSamplesX;
    int numSamplesY;
    // We must divide coordinates by (numSamples / 2) to get the average correlation.
    // Each sample adds an x or y component ONLY (not both x and y), which is why we
    // divide by (numSamples / 2), not numSamples. numSamples may (or may not) be
    // different for different pixels. threshold == -1 may be sufficient to use
    // numSamples for the total picture and not for each pixel.
    ////////////////////////////////////////////////////////////////
    // END OF CALIBRATION DATA - SCREEN-CAMERA PIXEL CORRELATION. //
    ////////////////////////////////////////////////////////////////
    ThreeD* result; // the normal vectors corresponding to each pixel
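The per-pixel threshold rule described in the comments above can be written out directly. The contrast margin is a named assumption here, since the listing does not give the project's constant:

```cpp
#include <cassert>

// Threshold rule from the comments above: after the black and white frames,
// a pixel's threshold is the average of the two r+g+b sums if their difference
// exceeds a contrast margin, otherwise -1 (the pixel does not see the screen).
int PixelThreshold(int blackSum, int whiteSum, int contrast) {
    if (whiteSum - blackSum < contrast)
        return -1;                     // pixel did not change much: not on screen
    return (blackSum + whiteSum) / 2;  // midpoint separates 'on' from 'off'
}
```

Pixels returning -1 are excluded from every later pass stage, which is how the calibration confines itself to the visible screen region.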
private:
    static const int JSURFACE_CALIBRATION_DEFAULT_NUM_HOLD_FRAMES = 1;
    static const int DEFAULT_SQUARE_SIZE = 20;
public:
    JSurfaceCalibration(JDevice* setDevice, int setX, int setY, DWORD setOptions = NULL);
    virtual ~JSurfaceCalibration();
public:
    virtual void Disappear() {}
    virtual void Reappear() {}
    void Calibrate();
    void Go()   { if (greenLight == 0) greenLight = 1; }  // good to go
    void Stop() { if (greenLight == 0) greenLight = -1; } // stop
    void ResizeSquare(BOOL bigger);
    JSurface* GetVideo() { return output; } // use this to display what the camera sees
private:
    void Coordinate();
    void Correlate();
    void OnUpdate(); // new video data
    void NewSample();
    void CompleteSample();
    void CombineSamples();
    void VerticalCutoff();
    void HorizontalCutoff();
    BOOL SetThreshold(int stage);
    static BOOL IsVerticalStage(CalibrationStage stage);
    static BOOL DoesModeInvert(CalibrationStage stage);
};

class JCalibrationVideo;

class JCalibrationVideoSurface : public JSurfaceCalibration
{
private:
    JCalibrationVideo* video;
public:
    JCalibrationVideoSurface(JDevice* setDevice, int setX, int setY,
                             JCalibrationVideo* setVideo, DWORD setOptions = NULL)
        : JSurfaceCalibration(setDevice, setX, setY, setOptions) { video = setVideo; }
    void Disappear();
    void Reappear();
};

class JCalibrationVideo : public JControl
{
private:
    JCalibrationVideoSurface* video;
public:
    JCalibrationVideo() : JControl(ST_STRETCH) { video = NULL; SetAlwaysDraw(TRUE); }
    virtual ~JCalibrationVideo() { delete video; }
public:
    void OnDevice() { NewVideo(); }
    void NewVideo()
    {
        delete video;
        if (GetWidth() && GetHeight())
            video = new JCalibrationVideoSurface(device, GetWidth(), GetHeight(), this);
        else
            video = NULL;
    }
    void OnDraw()
    {
        if (device->GetSurface()) {
            device->GetSurface()->SetClipRegion(NULL);
            RECT dcPos = { 0, 0, video->GetX(), video->GetY() };
            if (video->HasSurface())
                device->GetSurface()->Draw(dcPos, video, dcPos);
            // (commented-out debug drawing code omitted here)
        }
    }
    void OnPos() { NewVideo(); }
    JSurfaceCalibration* GetSurface() { return video; }
};
// JSurfaceCentroid.h
#pragma once
#include "JSurfaceVideo.h"
#include "JSurface.h"

class JSurface2 : public JSurface
{
public:
    JSurface2(JDevice* setDevice, DWORD setOptions = NULL)
        : JSurface(setDevice, 0, 0, setOptions | JS_TEXTURE) {}
};

class JSurfaceCentroid : public JSurface
{
private:
    JSurface* original;
public:
    JSurfaceCentroid(JDevice* setDevice, DWORD setOptions = NULL);
    virtual ~JSurfaceCentroid(void);
public:
    void OnInitSurface();
    int DetermineCentroid(COLORREF low, COLORREF high, int& x, int& y, float variation = 0.2f);
    void Parse();
    void Redimension();
};

// JSurfacePingPong.h
#pragma once
#include "../JGL/JSurfaceCentroid.h"

class JSurfacePingPong : public JSurfaceCentroid
{
public:
    JSurfacePingPong(JDevice* setDevice, DWORD setOptions = NULL);
    virtual ~JSurfacePingPong();
public:
    void Parse();
};

// JSceneCameraSetup.h
#pragma once
#include "../JGL/JSceneMobile.h"

class JSceneCameraSetup : public JSceneMobile
{
public:
    JSceneCameraSetup(int setMaxPrimitives = 5120, DWORD setType = ST_STRETCH);
    virtual ~JSceneCameraSetup();
};
// JSurfaceVideo.h
#pragma once
#include "JSurface.h"
#include "JVideoFilter.h"
#include <dshow.h>

#define REGISTER_FILTERGRAPH

class JVideoFilter;

class JSurfaceVideo : public JSurface
{
public:
    static const int COLOUR_CHANNELS = 6;
    static const int COLOURS = 3;
private:
    CCritSec criticalSection;
    CComPtr<IGraphBuilder> graphBuilder;
    CComPtr<IMediaControl> mediaControl;
    CComPtr<IMediaEvent> mediaEvent;
    CString name;
    int pitch;
    int differentiations;
    // data is in the form: data[(colour * yMax + y) * xMax + x] - red, green, blue.
    // data is in word form to reduce quantisation error, but not at the computational
    // cost of using a DWORD per colour per pixel, which would be excessive.
    short* data;
public:
    JSurfaceVideo(JDevice* setDevice, DWORD setOptions = NULL);
    virtual ~JSurfaceVideo();
public:
    virtual void OnUpdate() {} // called whenever new data is present
public:
    HRESULT Initialise();
    HRESULT CaptureVideo(IBaseFilter* pRenderer);
    HRESULT FindCaptureDevice(IBaseFilter** ppSrcFilter);
    HRESULT CopyMediaSample(IMediaSample* pSample);
    void CleanUp();
    void CheckMovieStatus();
    void CreateSurface(int width, int height, int setPitch);
#ifdef REGISTER_FILTERGRAPH
    HRESULT AddToROT(IUnknown* pUnkGraph);
    void RemoveFromROT();
    DWORD rotRegistration;
#endif
public:
    virtual void Parse(); // do image processing from data to image
    virtual void OnInitSurface();
    void BlurMedian(int iterations = 1, BOOL xy = FALSE);
    void Blur(int iterations = 1, BOOL xy = FALSE);
    void DeContrast(int deviation = 64); // if deviation is zero, integer division can occur
    // void EnhanceEdges(); // not written yet
    void CreateXY(); // required before differentiation
    void Differentiate(int iterations);
    void DifferentiateRelative(int iterations = 1, int deviation = 16);
    void SumAbsolute(); // required after differentiation
    void SumSquares();  // required after differentiation
    void CrossProductCutOff();
    void DotProductCutOff();
    void BothProductsCutOff();
    void Scale(float factor, BOOL xy = FALSE);
    short* Offset(int col, int xx, int yy) { return data + ((col * GetY() + yy) * GetX() + xx); }
    int MaxX() { return GetX() - differentiations; }
    int MaxY() { return GetY() - differentiations; }
    void GetName(CString& rval) { rval = name; }
};

// JCameraFOV.h
#pragma once
#include "../JGL/JModel.h"
#include "../JGL/JSurfaceCalibration.h"

// Calibrated camera's field-of-view visualisation class.
// 9 May 2005, 12:21 pm, Jarrad Springett:
// WE ALSO WANT TO DRAW CORNER LINES (AND THE CENTRE LINE) THROUGH THE CAMERA
class JCameraFOV : public JModel
{
private:
    Calibration* calibration;
public:
    JCameraFOV(const Calibration* setCalibration);
    virtual ~JCameraFOV();
public:
    void OnDraw(JScene* scene, int maxPrimitives);
};
// JSurfaceCalibration.cpp
#include "stdafx.h"
#include "JSurfaceCalibration.h"

#define SURFACE_CALIBRATION_SQUARE_INCREMENT 10
//#define FAST_DATA

const int JSurfaceCalibration::DEFAULT_MIN_THRESHOLD_VALUE = 15;

LPCTSTR JCalibrationProperty::title[3] = { "dx (mm)", "dy (mm)", "dz (mm)" };

JSurfaceCalibration::JSurfaceCalibration(JDevice* setDevice, int setX, int setY, DWORD setOptions)
    : JSurface(setDevice, setX, setY, setOptions)
{
    // some of these things may need to be reset between experiment runs:
    video = new JSurfaceCalibrationVideo(setDevice, this);
    output = new JSurface(setDevice, video->GetX(), video->GetY());
    numHoldFrames = JSURFACE_CALIBRATION_DEFAULT_NUM_HOLD_FRAMES;
    frame = numHoldFrames;
    stage = CS_SQUARE;
    squareSize = DEFAULT_SQUARE_SIZE;
    greenLight = 0;
    minThresholdRatio = DEFAULT_MIN_THRESHOLD_VALUE;
    vcutoff = (float*)malloc(sizeof(float) * video->GetX() * setY);
    hcutoff = (float*)malloc(sizeof(float) * setX * video->GetY());
    threshold = (int*)malloc(sizeof(int) * setX * setY);
    NewSample();
}

JSurfaceCalibration::~JSurfaceCalibration()
{
    free(threshold);
    free(hcutoff);
    free(vcutoff);
    delete output;
    delete video;
    INT_PTR max = cameraToScreen.GetSize();
    for (INT_PTR index = 0; index < max; index++)
        delete cameraToScreen[index];
}

JSurfaceCalibration::JSurfaceCalibrationVideo::JSurfaceCalibrationVideo(JDevice* setDevice,
    JSurfaceCalibration* setCalibration, DWORD setOptions)
    : JSurfaceVideo(setDevice, setOptions | JS_TEXTURE)
{
    calibration = setCalibration;
}

void JSurfaceCalibration::Calibrate()
{
/* PROCEDURE:
   Note: A multimonitor display may be beneficial, so that the JSurfaceVideo output can be
   on the non-LCD screen while the test occurs on the LCD screen. This is not required.
   Display a square on the screen; allow the user to orient the camera. [PRESS ENTER]
   Display black and record the threshold, then display white and record the threshold,
   for each pixel individually. IF A PIXEL DOES NOT CHANGE MUCH, IT IS NOT PART OF THE
   SCREEN. DO NOT RECORD ANY CALIBRATION INFORMATION FOR IT!

   Display vertical and horizontal passes, waiting and then sampling from the
   JSurfaceVideo; the sampling returns the vertical / horizontal offset of the colour
   boundary. After processing, a box representing the camera boundary is drawn on the
   screen, with a cross-hair for the centre of the image.
   THE SAMPLING RESULTS ARE THEN RECORDED FOR PROCESSING.

   PROCESSING RESULTS:
   For each camera pixel, find the vertical and horizontal bounds that encompass its
   centre and interpolate to find a corresponding virtual screen pixel (with subpixel
   accuracy). This generates a CAMERA->SCREEN PIXEL CORRELATION.
   The procedure is repeated to generate sets of CAMERA->SCREEN PIXEL CORRELATIONS.
   These correlations are associated with horizontal / vertical / depth-wise offsets
   from an arbitrary origin (the first sample can be at the origin, if desired).

   FOV angle / DPI CALIBRATION and CAMERA-VECTOR CORRELATION:
   The physical offsets of the CAMERA-SCREEN calibration stages are combined to produce
   a CAMERA-VECTOR CORRELATION, with FOV and DPI outputs.

   CAMERA-RELATIVE CALIBRATION STAGE:
   After storing camera-vector (and camera-screen, although no longer required)
   calibration data to disk, we can run a similar program with cameras that share
   overlapping fields of view. We run the black / white test to see which pixels are in
   view, then run vertical / horizontal scans across the screen to correlate
   camera-screen pixels.
   The camera-screen pixel correlations are given to the camera-vector correlation and
   used to determine the physical transform between the two cameras' central aiming
   vectors and the cameras' focal points (as given by the intersection of the camera
   vectors; get someone to write a program that determines the best intersection point
   from a set of vectors that approximately intersect). The four corners may be the
   easiest points to use to correspond these cameras.
   By having auxiliary views, cameras with non-overlapping fields of view can have their
   transforms correlated through intermediate cameras that are subsequently removed. */
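The interpolation step of the procedure above (bracketing a camera pixel between two pass cutoffs and interpolating to a sub-pixel screen coordinate) can be sketched in one dimension. The function and parameter names below are ours, and the cutoff array is indexed by screen row as vcutoff[] is in the code:

```cpp
#include <cassert>
#include <cmath>

// Find the pair of adjacent screen-row cutoffs that bracket a camera row and
// linearly interpolate between them, giving a sub-pixel screen coordinate.
// Cutoff entries of -1 mark rows that were below threshold and are skipped.
float ScreenCoordinate(const float* cutoff, int numRows, float cameraRow) {
    for (int r = 0; r + 1 < numRows; r++) {
        float a = cutoff[r], b = cutoff[r + 1];
        if (a < 0 || b < 0) continue; // row was not on the screen
        if ((cameraRow >= a && cameraRow <= b) || (cameraRow >= b && cameraRow <= a)) {
            if (a == b) return (float)r;
            return r + (cameraRow - a) / (b - a); // interpolate between the two rows
        }
    }
    return -1.0f; // camera row not bracketed by any pair of cutoffs
}
```

Running this once per camera pixel, for both the vertical and horizontal cutoff arrays, yields the camera-to-screen pixel correlation described above.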
    if (frame >= numHoldFrames) {
        // use stage, progress and video to coordinate screen to camera prior to the next stage
        switch (stage) {
        case CS_SQUARE:
        {
            SetCoded(black);
            FillRect(NULL);
            RECT rect = { (GetX() >> 1) - squareSize, (GetY() >> 1) - squareSize,
                          (GetX() >> 1) + squareSize, (GetY() >> 1) + squareSize };
            SetCoded(white);
            FillRect(&rect);
            if (greenLight) {
                stage = CS_BLACK;
                numHoldFrames = 1;
            }
            break;
        }
        case CS_BLACK:
            if (greenLight) {
                SetCoded(black);
                FillRect(NULL);
                greenLight = 0;
                // if (SetThreshold(0))
                stage = CS_WHITE;
                numHoldFrames = 7;
            }
            break;
        case CS_WHITE:
            SetCoded(white);
            FillRect(NULL);
            // stage = PASS_TOP_ACROSS;
            progress = 0;
            if (SetThreshold(0)) {
                stage = PRE_PASS_TOP_ACROSS;
                numHoldFrames = 7;
            }
            break;
        case PRE_PASS_TOP_ACROSS:
            if (SetThreshold(1)) {
                stage = PASS_TOP_ACROSS;
                numHoldFrames = 1;
            }
            break;
        case PASS_TOP_ACROSS:
            if (progress <= GetY()) {
                Coordinate();
                SetCoded(black);
                FillRect(NULL);
                SetCoded(white);
                RECT rect = { 0, 0, GetX(), progress };
                FillRect(&rect);
#ifdef FAST_DATA
                progress += 10;
#else
                progress++;
#endif
            }
            else {
                Correlate();
#ifdef FAST_DATA
                stage = PASS_LEFT_ACROSS;
                progress = 0;
#else
                progress = GetY() - 1;
                stage = PASS_TOP_BACK;
#endif
            }
            break;
        case PASS_TOP_BACK:
            if (progress >= 0) {
                Coordinate();
                SetCoded(black);
                FillRect(NULL);
                SetCoded(white);
                RECT rect = { 0, 0, GetX(), progress };
                FillRect(&rect);
                progress--;
            }
            else {
                Correlate();
                progress = 0;
                stage = PASS_LEFT_ACROSS;
            }
            break;
        case PASS_LEFT_ACROSS:
            if (progress <= GetX()) {
                Coordinate();
                SetCoded(black);
                FillRect(NULL);
                SetCoded(white);
                RECT rect = { 0, 0, progress, GetY() };
                FillRect(&rect);
#ifdef FAST_DATA
                progress += 10;
#else
                progress++;
#endif
            }
            else {
                Correlate();
#ifdef FAST_DATA
                progress = 0;
                greenLight = 0;
                stage = CS_BOUNDARY;
#else
                progress = GetX() - 1;
                stage = PASS_LEFT_BACK;
#endif
            }
            break;
        case PASS_LEFT_BACK:
            if (progress >= 0) {
                Coordinate();
                SetCoded(black);
                FillRect(NULL);
                SetCoded(white);
                RECT rect = { 0, 0, progress, GetY() };
                FillRect(&rect);
                progress--;
            }
            else {
                Correlate();
                progress = GetY() - 1;
                stage = PASS_UP_ACROSS;
            }
            break;
        case PASS_UP_ACROSS:
            if (progress >= 0) {
                Coordinate();
                SetCoded(black);
                FillRect(NULL);
                SetCoded(white);
                RECT rect = { 0, progress, GetX(), GetY() };
                FillRect(&rect);
                progress--;
            }
            else {
                Correlate();
                progress = 0;
                stage = PASS_UP_BACK;
            }
            break;
        case PASS_UP_BACK:
            if (progress <= GetY()) {
                Coordinate();
                SetCoded(black);
                FillRect(NULL);
                SetCoded(white);
                RECT rect = { 0, progress, GetX(), GetY() };
                FillRect(&rect);
                progress++;
            }
            else {
                Correlate();
                progress = GetX() - 1;
                stage = PASS_RIGHT_ACROSS;
            }
            break;
        case PASS_RIGHT_ACROSS:
            if (progress >= 0) {
                Coordinate();
                SetCoded(black);
                FillRect(NULL);
                SetCoded(white);
                RECT rect = { progress, 0, GetX(), GetY() };
                FillRect(&rect);
                progress--;
            }
            else {
                Correlate();
                progress = 0;
                stage = PASS_RIGHT_BACK;
            }
            break;
        case PASS_RIGHT_BACK:
            if (progress <= GetX()) {
                Coordinate();
                SetCoded(black);
                FillRect(NULL);
                SetCoded(white);
                RECT rect = { progress, 0, GetX(), GetY() };
                FillRect(&rect);
                progress++;
            }
            else {
                Correlate();
                progress = 0;
                greenLight = 0;
                stage = CS_BOUNDARY;
            }
            break;
        case CS_BOUNDARY:
            // draw the boundary now that we are done!
            if (greenLight) {
                // next sample: set the sample distance in order to complete the sample.
                // Query returns false while waiting for user input,
                // and goes away before it returns true:
                Disappear();
                if (current->Query(GetDevice())) {
                    greenLight = 0;
                    if (current->keepSampling) {
                        Reappear();
                        CompleteSample(); // sampleDistance);
                        NewSample();
                        stage = CS_SQUARE;
                    }
                    else {
                        // stop and process
                        Reappear();
                        CompleteSample();
                        CombineSamples();
                        stage = CS_DONE;
                    }
                }
            }
            else if (Lock()) {
                SetCoded(black);
                FillRectLocked(NULL);
                SetCoded(yellow);
                int xx = video->GetX();
                int yy = video->GetY();
                int cx = xx / 2;
                int cy = yy / 2;
                // (under FAST_DATA the same corner points are taken without the / 4 scaling)
                CPoint topLeft((int)current->data[5 + 5 * xx].x / 4,
                               (int)current->data[5 + 5 * xx].y / 4);
                CPoint topRight((int)current->data[xx - 1 - 5 + 5 * xx].x / 4,
                                (int)current->data[xx - 1 - 5 + 5 * xx].y / 4);
                CPoint bottomLeft((int)current->data[5 + (yy - 6) * xx].x / 4,
                                  (int)current->data[5 + (yy - 6) * xx].y / 4);
                CPoint bottomRight((int)current->data[xx - 6 + (yy - 6) * xx].x / 4,
                                   (int)current->data[xx - 6 + (yy - 6) * xx].y / 4);
                CPoint centre((int)current->data[xx * cy + cx].x / 4,
                              (int)current->data[xx * cy + cx].y / 4);
                DrawLine(topLeft, topRight);
                DrawLine(topRight, bottomRight);
                DrawLine(bottomRight, bottomLeft);
                DrawLine(bottomLeft, topLeft);
                SetCoded(green);
                DrawCrossHair(centre);
                Unlock();
            }
            break;
        case CS_DONE:
            SetCoded(black);
            FillRect(NULL);
            break;
        default:
            ASSERT(FALSE);
        }
        if (frame > 0)
            frame = 0;
    }
}

// Coordinate the screen / camera results for the previous frame:
void JSurfaceCalibration::Coordinate()
{
    // determine cutoffs:
    /* We determine the cutoffs by counting the number of white and threshold pixels in
       the pass (a pass is a row / column depending on whether we are scanning vertically
       or horizontally). If the number of threshold pixels is above a minimum, say 5% as
       default, then set the cutoff as #white pixels, skipping over pixels below the
       threshold.
       NOTE: This requires 2 passes - one for #white and #threshold, and one for the
       #white to skip over below-threshold pixels, giving us our cutoff for the row /
       column. If we were below threshold, set the cutoff to -1. */
    if (stage >= PASS_TOP_ACROSS && stage <= PASS_RIGHT_BACK) {
        if (IsVerticalStage(stage))
            VerticalCutoff();
        else
            HorizontalCutoff();
    }
}

void JSurfaceCalibration::VerticalCutoff()
{
    // the cutoffs are in video - we need to lock video and get those cutoffs!
    if (video->Lock()) {
        BOOL invert = DoesModeInvert(stage);
        DWORD* data = (DWORD*)video->GetData();
        // count columns:
        int maxX = video->GetX();
        int maxY = video->GetY();
        int minCount = (int)((float)maxY * (float)progress / (float)GetY());
        for (int xx = 0; xx < maxX; xx++) {
            int count = 0;
            int above = 0;
            for (int yy = 0; yy < maxY; yy++) {
                if (threshold[yy * maxX + xx] >= 0) {
                    count++;
                    DWORD colour = data[yy * maxX + xx];
Page 198 of 274
if (GetR(colour) + GetG(colour) + GetB(colour) >= threshold[yy * maxX + xx]) if (!invert) above++; else if (invert) above++; // now skip above pixels into threshold: if (count >= minCount) count = 0; for (int yy = 0; yy < maxY; yy++) if (threshold[yy * maxX + xx] >= 0) if (count >= above) break; count++; // ASSERT(yy < maxY); // count is the vertical cutoff #ifdef FAST_DATA for (int i = 0; i < 10; i++) vcutoff[progress + i + xx * GetY()] = (float)count; #else vcutoff[progress + xx * GetY()] = (float)count; #endif else #ifdef FAST_DATA for (int i = 0; i < 10; i++) vcutoff[progress + i + xx * GetY()] = -1.0f; #else vcutoff[progress + xx * GetY()] = -1.0f; #endif video->Unlock(); BOOL JSurfaceCalibration::DoesModeInvert(CalibrationStage stage) switch (stage) case PASS_UP_ACROSS: case PASS_UP_BACK: case PASS_RIGHT_ACROSS: case PASS_RIGHT_BACK: return TRUE; default: return FALSE;
void JSurfaceCalibration::HorizontalCutoff()
{
    // the cutoffs are in video - we need to lock video and get those cutoffs!
    if (video->Lock())
    {
        BOOL invert = DoesModeInvert(stage);
        DWORD* data = (DWORD*)video->GetData();
        // count rows:
        int maxX = video->GetX();
        int maxY = video->GetY();
        int minCount = (int)((float)maxX * (float)progress / (float)GetX()); //maxX; //(int)((float)maxX * minThresholdRatio);
        for (int yy = 0; yy < maxY; yy++)
        {
            int count = 0;
            int above = 0;
            for (int xx = 0; xx < maxX; xx++)
            {
                if (threshold[yy * maxX + xx] >= 0)
                {
                    count++;
                    DWORD colour = data[yy * maxX + xx];
                    if (GetR(colour) + GetG(colour) + GetB(colour) >= threshold[yy * maxX + xx])
                    {
                        if (!invert)
                            above++;
                    }
                    else if (invert)
                        above++;
                }
            }
            // now skip above pixels into threshold:
            if (count >= minCount)
            {
                count = 0;
                for (int xx = 0; xx < maxX; xx++)
                {
                    if (threshold[yy * maxX + xx] >= 0)
                    {
                        if (count >= above)
                            break;
                        count++;
                    }
                }
                // ASSERT(xx < maxX);
                // count is the horizontal cutoff
#ifdef FAST_DATA
                for (int i = 0; i < 10; i++)
                    hcutoff[progress + i + yy * GetX()] = (float)count;
#else
                hcutoff[yy * GetX() + progress] = (float)count;
#endif
            }
            else
            {
#ifdef FAST_DATA
                for (int i = 0; i < 10; i++)
                    hcutoff[progress + i + yy * GetX()] = -1.0f;
#else
                hcutoff[yy * GetX() + progress] = -1.0f;
#endif
            }
        }
        video->Unlock();
    }
}

static int CompareFloats(const void* arg0, const void* arg1)
{
    float v0 = *(float*)arg0;
    float v1 = *(float*)arg1;
    if (v0 > v1)
        return 1;
    if (v0 < v1)
        return -1;
    return 0;
}

void JSurfaceCalibration::Correlate()
{
    // add cutoff values to the cameraToScreen correlation:
    /* We have our cutoff values; we need to add the relevant coordinates into
       the current calibration set. If we were a vertical (top/bottom) pass,
       update the y coordinates of the calibration set (horizontal updates
       x coordinates). To update, any cutoffs that are non-negative are used
       (-1 implies below threshold).
       We skip the rows, searching for a cutoff of 0.5 (possibly between
       screen pixels), which maps to camera pixel 0. We search for a cutoff of
       1.5, mapping to camera pixel 1, and so on, mapping all pixels that we
       can. Below threshold pixels are not mapped.
       We search for a cutoff of 1 to 2, using the average as the mapping from
       camera row 1 to the screen. All threshold pixels should map to screen
       pixels (in floating point, as they can lie between pixels). */
    // ASSERT(FALSE); // not coded yet
    // int mx = 0;
    if (IsVerticalStage(stage))
    {
        // vertical correlation
        // for each column, find the cutoffs that match our pixel
        for (int xx = 0; xx < video->GetX(); xx++)
        {
            // mx = xx;
            // sort the vcutoffs
            qsort(&vcutoff[xx * GetY()], GetY(), sizeof(float), CompareFloats);
            int cameraY = 0;
            for (int yy = 0; yy < GetY(); yy++)
            {
                while (vcutoff[yy + xx * GetY()] > cameraY)
                {
                    current->data[cameraY * video->GetX() + xx].y += yy;
                    cameraY++;
                }
            }
        }
    }
    else
    {
        for (int yy = 0; yy < video->GetY(); yy++)
        {
            // sort the hcutoffs
            qsort(&hcutoff[yy * GetX()], GetX(), sizeof(float), CompareFloats);
            int cameraX = 0;
            for (int xx = 0; xx < GetX(); xx++)
            {
                while (hcutoff[xx + yy * GetX()] > cameraX)
                {
                    current->data[yy * video->GetX() + cameraX].x += xx;
                    cameraX++;
                }
            }
        }
    }
}

void JSurfaceCalibration::OnUpdate()
{
    // *** WE NEED A MUTEX HERE AND ON DELETION *** //
    // a new video image has been captured; ensure that the minimum number of
    // frames are captured:
    // optionally merge subsequent frames' data for better results???
    if (frame < numHoldFrames)
        frame++;
    // go through the video and update pixels for output:
    if (video->Lock())
    {
        if (output->Lock())
        {
            COLOUR* read = (COLOUR*)video->GetData();
            COLOUR* write = (COLOUR*)output->GetData();
            int pitch = video->GetX();
            ASSERT(output->GetX() == video->GetX() && output->GetY() == video->GetY());
            for (int yy = 0; yy < video->GetY(); yy++)
            {
                for (int xx = 0; xx < pitch; xx++)
                {
                    int t = threshold[yy * pitch + xx];
                    COLOUR colour = read[yy * pitch + xx];
                    if ((stage >= PASS_TOP_ACROSS && stage <= PASS_RIGHT_BACK) && t >= 0)
                    {
                        if (GetR(colour) + GetG(colour) + GetB(colour) >= t)
                            write[yy * pitch + xx] = RGB2(255, 255, 0);
                        else
                            write[yy * pitch + xx] = RGB2(0, 255, 255);
                    }
                    else
                        write[yy * pitch + xx] = colour;
                }
            }
            output->Unlock();
        }
        video->Unlock();
    }
}

void JSurfaceCalibration::CompleteSample()
{
    /* Ensure the sample distance is updated. We need to get current and
       divide the samples' data by (numSamplesX for x, numSamplesY for y) */
    //ASSERT(FALSE); // not coded yet
    // get the dx, dy, dz
    for (int yy = 0; yy < video->GetY(); yy++)
    {
        for (int xx = 0; xx < video->GetX(); xx++)
        {
#ifndef FAST_DATA
            current->data[xx + yy * video->GetX()].x /= 4; // 4 horizontal samples
            current->data[xx + yy * video->GetX()].y /= 4; // 4 vertical samples
#endif
        }
    }
}

void JSurfaceCalibration::NewSample()
{
    current = new CalibrationSet(video->GetX(), video->GetY());
    current->dx = 0;
    current->dy = 0;
    current->dz = 0;
    cameraToScreen.Add(current);
    numSamplesX = 0;
    numSamplesY = 0;
}

void JSurfaceCalibration::CombineSamples()
{
    /* We can determine DPI here from dx, dy (116.667 or 130 DPI for my screen?)
       Camera-screen mappings should be stored to disk.
       Camera-screen mappings should be viewable as a text dump with vertical
       and horizontal FOV.
       Camera-screen mappings should be viewable as a 2D vector field, or as a
       3D vector model.
       Can we perform a verification test? Relative distances between samples
       can be found by taking the distance from each corner of the image -
       distances should be directly proportional to this amount.
       Pick the pair of samples that have the largest difference in pixels and
       use these. Their dz difference * (larger sample perimeter / smaller
       sample perimeter) gives the distance to the further sample - providing
       a dz scale factor and offset. The dz scale factor can be used to scale
       all results for dx and dy back to one plane.
       The largest dx difference is used to give a scale factor and offset for dx.
       The largest dy difference is used to give a scale factor and offset for dy.
       Now dx, dy and dz correspond to actual focal point positions of the
       camera samples in 3D space above the screen, so for each camera pixel,
       determine the unit vector for each sample that it corresponds to from
       the camera's focal point, adding these unit vectors from multiple
       samples (where the samples were within the threshold) and then
       normalising the result. We end up with a unit vector for each camera
       pixel and a DPIx and DPIy estimate of the screen that was used for the
       sample (these DPI will be used in the combined camera calibration
       stage).
       To perform the final combined camera calibration, three points in space
       must be identifiable from each image and these must correspond. Allow
       the user to specify points that correspond to each other across various
       images. */
    // this is the part where some easy vector maths may occur
    // I should get a set of 3D points that lie along the line of the pixel
    // This line becomes the pixels' normal
    // first, we must find the centrepoint of the 2 images:
    /* The centrepoint is a certain ratio from all projections.
       If the ratio is further from 1, move the centrepoint away; if the ratio
       is closer to 1, bring the centrepoint in. Our first guess is the
       intersection of the top two corners. */
    // see 12 March 2005, Ping Pong Robot Notebook 2 for details:
    // Cramer's Rule is on p342 of Kreyszig:
    int numPoints = video->GetX() * video->GetY();
    ASSERT(numPoints);
    ASSERT(cameraToScreen.GetSize() == 2); // 2 samples please
    float a11 = cameraToScreen[1]->data[0].x - cameraToScreen[0]->data[0].x;
    float a12 = cameraToScreen[1]->data[video->GetX() - 1].x - cameraToScreen[0]->data[video->GetX() - 1].x;
    float a21 = cameraToScreen[1]->data[0].y - cameraToScreen[0]->data[0].y;
    float a22 = cameraToScreen[1]->data[video->GetX() - 1].y - cameraToScreen[0]->data[video->GetX() - 1].y;
    float b1 = cameraToScreen[0]->data[video->GetX() - 1].x - cameraToScreen[0]->data[0].x;
    float b2 = cameraToScreen[0]->data[video->GetX() - 1].y - cameraToScreen[0]->data[0].y;
    float det = a11 * a22 - a12 * a21;
    float cx = 0;
    float cy = 0;
    ASSERT(det != 0);
    if (det != 0)
    {
        cx = (b1 * a22 - a12 * b2) / det;
        cy = (a11 * b2 - b1 * a21) / det;
    }
    cx *= cameraToScreen[1]->data[0].x - cameraToScreen[0]->data[0].x;
    cx += cameraToScreen[0]->data[0].x;
    cy *= cameraToScreen[1]->data[0].y - cameraToScreen[0]->data[0].y;
    cy += cameraToScreen[0]->data[0].y;
    float ratio = 0;
    // jiggle for cx,cy for 16 iterations:
    for (int i = 0; i < 16; i++)
    {
        ratio = 0;
        float remainder = 0; // the remainder reduces floating point error
        for (int yy = 0; yy < video->GetY(); yy++)
        {
            for (int xx = 0; xx < video->GetX(); xx++)
            {
                float x0 = cameraToScreen[0]->data[xx + video->GetX() * yy].x;
                float y0 = cameraToScreen[0]->data[xx + video->GetX() * yy].y;
                float x1 = cameraToScreen[1]->data[xx + video->GetX() * yy].x;
                float y1 = cameraToScreen[1]->data[xx + video->GetX() * yy].y;
                // the +1 reduces error close to the cx,cy point
                if (x0 >= cx) x0 += 1; else x0 -= 1;
                if (y0 >= cy) y0 += 1; else y0 -= 1;
                float thisRatioX = (x1 - cx) / (x0 - cx);
                float thisRatioY = (y1 - cy) / (y0 - cy);
                float thisRatio = thisRatioX + thisRatioY;
                float newRatio = ratio + thisRatio;
                remainder += thisRatio - (newRatio - ratio);
                ratio = newRatio;
            }
        }
        ratio += remainder;
        ratio /= numPoints * 2;
        // now see where to jiggle cx,cy to:
        float newCX = 0;
        float newCY = 0;
        float rCX = 0;
        float rCY = 0;
        for (int yy = 0; yy < video->GetY(); yy++)
        {
            for (int xx = 0; xx < video->GetX(); xx++)
            {
                float x0 = cameraToScreen[0]->data[xx + video->GetX() * yy].x;
                float y0 = cameraToScreen[0]->data[xx + video->GetX() * yy].y;
                float x1 = cameraToScreen[1]->data[xx + video->GetX() * yy].x;
                float y1 = cameraToScreen[1]->data[xx + video->GetX() * yy].y;
                float thisCX = (x1 - ratio * x0) / (1 - ratio);
                float thisCY = (y1 - ratio * y0) / (1 - ratio);
                float _newCX = newCX + thisCX;
                rCX += thisCX - (_newCX - newCX);
                newCX = _newCX;
                float _newCY = newCY + thisCY;
                rCY += thisCY - (_newCY - newCY);
                newCY = _newCY;
            }
        }
        newCX += rCX;
        newCY += rCY;
        newCX /= numPoints;
        newCY /= numPoints;
        cx = newCX;
        cy = newCY;
    }
    // the centrepoint has been jiggled into position, and we have the ratio
    // ratio and dz's imply the distance to the screen of the first calibration set
    ASSERT(ratio != 1);
    float dz0 = cameraToScreen[0]->dz * GetDevice()->GetDPMx() * 0.001f;
    float dz1 = cameraToScreen[1]->dz * GetDevice()->GetDPMx() * 0.001f;
    float screenDis0 = (dz1 - dz0) / (ratio - 1);
    dz1 += screenDis0 - dz0;
    dz0 = screenDis0;
    // we have the screen distance of the frames!!! add this distance to all dz's
    //#error "add screenDis0 to all dz's here"
    // now we have dz's wrt screen and x and y's wrt centrepoint, so we can
    // simply take vectors of either screen and (optionally) average them
    // or use their difference to produce an error map:
    //#error scan each calibration set and use their average normalised vector, record the max error and its coordinate for output
    //#error record the maximum error (max distance between normals) for each point
    // then we are done! (but we also require a mechanism for inputting dx, dy, dz)
    Calibration* calibration = new Calibration(video->GetX(), video->GetY());
    CString name;
    video->GetName(name);
    calibration->SetName(name);
    for (int yy = 0; yy < video->GetY(); yy++)
    {
        for (int xx = 0; xx < video->GetX(); xx++)
        {
            ThreeD normal(
                cameraToScreen[0]->data[xx + yy * video->GetX()].x - cx,
                cameraToScreen[0]->data[xx + yy * video->GetX()].y - cy,
                dz0);
            ThreeD other(
                cameraToScreen[1]->data[xx + yy * video->GetX()].x - cx,
                cameraToScreen[1]->data[xx + yy * video->GetX()].y - cy,
                dz1);
            VERIFY(normal.normalise());
            VERIFY(other.normalise());
            ThreeD diff = normal;
            diff.subtract(&other);
            float error = diff.magnitude();
            normal.add(&other);
            VERIFY(normal.normalise());
            calibration->SetData(xx, yy, normal);
            calibration->SetError(xx, yy, error);
        }
    }
    // serialize calibration to camera.clb:
    CalibrationFile calibrationFile;
    calibrationFile.UpdateCamera(*calibration);
    // read in the entire camera calibration file,
    // update (or create) this camera's entry
    // close the file

    // OLD NOTES:
    // by moving the camera reference backwards / forwards we can align the
    // pixels with each other
    // after we get the average alignment (or on a per-pixel basis),
    // we can correlate pixels to vectors
    // convert pixels to the original image coordinates
    // scale factor determines distance of original image from screen
    // distance + x, y gives a vector, normalise and we're done
}

BOOL JSurfaceCalibration::IsVerticalStage(CalibrationStage stage)
{
    switch (stage)
    {
    case PASS_TOP_ACROSS:
    case PASS_TOP_BACK:
    case PASS_UP_ACROSS:
    case PASS_UP_BACK:
        return TRUE;
    default:
        return FALSE;
    }
}

Calibration::Calibration(int setX, int setY, int setMargin)
{
    x = setX;
    y = setY;
    margin = setMargin;
    if (margin > x / 2)
        margin = x / 2;
    if (margin > y / 2)
        margin = y / 2;
    data = (ThreeD*)malloc(sizeof(ThreeD) * x * y);
    error = (float*)malloc(sizeof(float) * x * y);
}

Calibration::~Calibration()
{
    free(data);
    free(error);
}

void Calibration::Serialize(CArchive& archive)
{
    int version = 1;
    if (archive.IsStoring())
    {
        archive << version;
        archive << x << y << referenceCamera;
        archive << margin;
        archive << name;
        transform.Serialize(archive);
        repeat(yy, y)
        {
            repeat(xx, x)
            {
                data[xx + x * yy].Serialize(archive);
                archive << error[xx + x * yy];
            }
        }
    }
    else
    {
        archive >> version;
        archive >> x >> y >> referenceCamera;
        if (version >= 1)
            archive >> margin;
        archive >> name;
        transform.Serialize(archive);
        data = (ThreeD*)realloc(data, sizeof(ThreeD) * x * y);
        error = (float*)realloc(error, sizeof(float) * x * y);
        repeat(yy, y)
        {
            repeat(xx, x)
            {
                data[xx + x * yy].Serialize(archive);
                archive >> error[xx + x * yy];
            }
        }
    }
}
Calibration* Calibration::Clone() const
{
    // return a copy of the calibration
    Calibration* clone = new Calibration(x, y);
    clone->SetName(name);
    clone->transform = transform;
    clone->referenceCamera = referenceCamera;
    repeat(yy, y)
    {
        repeat(xx, x)
        {
            clone->data[xx + yy * x] = data[xx + yy * x];
            clone->error[xx + yy * x] = error[xx + yy * x];
        }
    }
    return clone;
}

CalibrationFile::CalibrationFile()
{
    dirty = FALSE;
    //#error "attempt to load (and close) from camera.clb"
    CFile file;
    if (file.Open("camera.clb", CFile::modeRead | CFile::shareDenyWrite))
    {
        CArchive archive(&file, CArchive::load);
        Serialize(archive);
    }
}

CalibrationFile::~CalibrationFile()
{
    Save();
    // delete all of the calibrations
    repeat(i, calibration.GetSize())
        delete calibration[i];
}

// serialization automatically occurs during constructor (load) and destructor (save)
// save to camera.clb (.clb - .calibration)
void CalibrationFile::Serialize(CArchive& archive)
{
    int version = 1;
    if (archive.IsStoring())
    {
        int size = calibration.GetSize();
        archive << version;
        archive << size;
        repeat(i, size)
            calibration[i]->Serialize(archive);
    }
    else
    {
        int size;
        archive >> version;
        archive >> size;
        repeat(i, size)
        {
            Calibration* next = new Calibration();
            next->Serialize(archive);
            calibration.Add(next);
        }
    }
}

// update a camera
void CalibrationFile::UpdateCamera(Calibration& camera)
{
    // find the camera's name in us if we have it already
    CString cameraName;
    camera.GetName(cameraName);
    repeat(i, calibration.GetSize())
    {
        CString storedName;
        calibration[i]->GetName(storedName);
        if (cameraName == storedName)
        {
            // found it
            delete calibration[i];
            calibration.SetAt(i, camera.Clone());
            return;
        }
    }
    // camera not detected, so add it:
    calibration.Add(camera.Clone());
}

// return NULL if calibration not found
// the calibration returned by GetCalibration() should be cloned if it is to
// be stored in JSurfaceVideo
const Calibration* CalibrationFile::GetCalibration(CString& cameraName)
{
    repeat(i, calibration.GetSize())
    {
        CString storedName;
        calibration[i]->GetName(storedName);
        if (cameraName == storedName)
        {
            // found it
            return calibration[i];
        }
    }
    // camera not found
    return NULL;
}

// save to disk (if dirty) (automatically called by destructor)
void CalibrationFile::Save()
{
    // CFile, CArchive
    CFile file;
    if (file.Open("camera.clb", CFile::modeCreate | CFile::modeWrite | CFile::shareExclusive))
    {
        CArchive archive(&file, CArchive::store);
        Serialize(archive);
    }
    else
    {
        ASSERT(FALSE);
        MessageBox(NULL, "Calibration file failed to save to disk!", "Error", MB_OK);
    }
}

CalibrationSet::CalibrationSet(int x, int y)
{
    data = (Point2D*)malloc(sizeof(Point2D) * x * y);
    dx = dy = dz = 0;
    waiting = FALSE;
    done = FALSE;
    int max = x * y;
    for (int i = 0; i < max; i++)
    {
        data[i].x = 0.0f;
        data[i].y = 0.0f;
    }
}

CalibrationSet::~CalibrationSet()
{
    free(data); // allocated with malloc(), so free() rather than delete
}

BOOL CalibrationSet::Query(JDevice* device)
{
    if (done)
    {
        waiting = FALSE;
        done = FALSE;
        return TRUE;
    }
    if (!waiting)
    {
        waiting = TRUE;
        // create some user inputs for dx, dy, dz and done:
        RECT rect = { 0, 0, device->GetWidth(), device->GetHeight() };
        JCalibrationQuery* query = new JCalibrationQuery(this);
        device->Add(query);
        query->SetPos(&rect);
    }
    return FALSE;
}

JCalibrationQuery::JCalibrationQuery(CalibrationSet* setParent)
{
    parent = setParent;
    repeat(i, 3)
        Add(new JCalibrationProperty(i));
    Add(new JCalibrationButton("Continue")); // when clicked, delete the parent
    Add(new JCalibrationButton("Finish"));
}

void JCalibrationQuery::OnPos()
{
    int yoffset = 10;
    repeat(i, 3)
    {
        RECT rect = { 10, yoffset, 300, yoffset + 50 };
        JCalibrationProperty* property = (JCalibrationProperty*)GetControl(i);
        property->SetPos(&rect);
        yoffset += 60;
    }
    RECT rect = { 10, yoffset, 300, yoffset + 50 };
    JCalibrationButton* button = (JCalibrationButton*)GetControl(3);
    button->SetPos(&rect);
    //yoffset += 60;
    RECT rect0 = { 310, yoffset, 600, yoffset + 50 };
    GetControl(4)->SetPos(&rect0);
}

JCalibrationProperty::JCalibrationProperty(int setDimension)
    : JProperty(GetTitle(setDimension), JE_NUMERIC_ONLY)
{
    dimension = setDimension;
}

void JCalibrationProperty::OnValue()
{
    // update the calibration set:
    CalibrationSet* set = ((JCalibrationQuery*)GetParent())->GetCalibrationSet();
    switch (dimension)
    {
    case 0:
        set->dx = GetValue();
        break;
    case 1:
        set->dy = GetValue();
        break;
    case 2:
        set->dz = GetValue();
        break;
    }
}

void JCalibrationButton::OnMouseClick()
{
    // remove the parent from the screen and tell the calibration set we're done
    CalibrationSet* set = ((JCalibrationQuery*)GetParent())->GetCalibrationSet();
    LPCTSTR text = GetText();
    if (text[0] == 'C')
        set->keepSampling = TRUE;
    else
        set->keepSampling = FALSE;
    set->done = TRUE;
    JContainer* parent = GetParent();
    JContainer* parent2 = parent->GetParent();
    parent2->Remove(parent2->GetIndex(parent));
    delete parent;
}

void JCalibrationVideoSurface::Disappear()
{
    video->SetVisible(FALSE);
}

void JCalibrationVideoSurface::Reappear()
{
    video->SetVisible(TRUE);
}

void JSurfaceCalibration::ResizeSquare(BOOL bigger)
{
    if (stage == CS_SQUARE)
    {
        stage = CS_SQUARE;
        frame = numHoldFrames;
        if (bigger)
            squareSize += SURFACE_CALIBRATION_SQUARE_INCREMENT;
        else
            squareSize -= SURFACE_CALIBRATION_SQUARE_INCREMENT;
    }
}

BOOL JSurfaceCalibration::SetThreshold(int stage)
{
    if (video->Lock())
    {
        COLOUR* data = (COLOUR*)video->GetData();
        ASSERT(data);
        int pitch = video->GetX();
        if (stage == 0)
        {
            // set the threshold values to the colour on the screen
            repeat(yy, video->GetY())
            {
                repeat(xx, video->GetX())
                {
                    COLOUR colour = data[yy * pitch + xx];
                    threshold[yy * pitch + xx] = GetR(colour) + GetG(colour) + GetB(colour);
                }
            }
        }
        else
        {
            ASSERT(stage == 1);
            // compare the current values to the previous threshold
            repeat(yy, video->GetY())
            {
                repeat(xx, video->GetX())
                {
                    COLOUR colour = data[yy * pitch + xx];
                    int difference = GetR(colour) + GetG(colour) + GetB(colour) - threshold[yy * pitch + xx];
                    int average = threshold[yy * pitch + xx] + (difference / 2);
                    if (difference > minThresholdRatio)
                        threshold[yy * pitch + xx] = average;
                    else
                        threshold[yy * pitch + xx] = -1;
                }
            }
        }
        video->Unlock();
        return TRUE;
    }
    return FALSE;
}

// JSurfaceCentroid.cpp
#include "stdafx.h"
#include ".\jsurfacecentroid.h"

JSurfaceCentroid::JSurfaceCentroid(JDevice* setDevice, DWORD setOptions)
    : JSurface(setDevice, setOptions)
{
    original = NULL;
    Redimension();
}

JSurfaceCentroid::~JSurfaceCentroid()
{
    delete original;
    original = NULL;
}

void JSurfaceCentroid::OnInitSurface()
{
    // JSurfaceVideo::OnInitSurface();
    Redimension();
}

void JSurfaceCentroid::Redimension()
{
    if (GetDevice())
    {
        if (HasSurface())
        {
            delete original;
            original = new JSurface(GetDevice(), GetX(), GetY());
        }
    }
}

void JSurfaceCentroid::Parse()
{
    // copy original image, then set it to black for centroid drawing
    // JSurfaceVideo::Parse();
    RECT rect = { 0, 0, GetX(), GetY() };
    if (original)
        original->Draw(rect, this, rect);
    // SetCoded(JSurface::black);
    // FillRect(&rect);
}

// return value is # of pixels detected (can be used for a rough distance
// estimate, or to see if any pixels were detected)
int JSurfaceCentroid::DetermineCentroid(COLORREF low, COLORREF high, int& x, int& y, float variation)
{
    if (!original)
        return 0;
    BYTE loRed = GetRValue(low);
    BYTE loGreen = GetGValue(low);
    BYTE loBlue = GetBValue(low);
    BYTE hiRed = GetRValue(high);
    BYTE hiGreen = GetGValue(high);
    BYTE hiBlue = GetBValue(high);
    float diffRed = 1.0f / (float)(hiRed - loRed);
    float diffGreen = 1.0f / (float)(hiGreen - loGreen);
    float diffBlue = 1.0f / (float)(hiBlue - loBlue);
    // go through the original image and mark the centroid image
    COLOUR mid = RGB2(
        ((int)loRed + (int)hiRed) >> 1,
        ((int)loGreen + (int)hiGreen) >> 1,
        ((int)loBlue + (int)hiBlue) >> 1);
    int rval = 0;
    x = 0;
    y = 0;
    if (original->Lock())
    {
        if (Lock())
        {
            if ((GetX() == original->GetX()) && (GetY() == original->GetY()))
            {
                COLOUR* pos = (COLOUR*)original->GetData();
                COLOUR* dest = (COLOUR*)GetData();
                uint opitch = original->GetPitch() >> 2;
                uint pitch = GetPitch() >> 2;
                int maxY = GetY();
                int maxX = GetX();
                for (int yy = 0; yy < maxY; yy++)
                {
                    for (int xx = 0; xx < maxX; xx++)
                    {
                        COLOUR colour = *pos;
                        *dest = 0;
                        if (GetR(colour) >= loRed && GetR(colour) <= hiRed)
                        {
                            if (GetG(colour) >= loGreen && GetG(colour) <= hiGreen)
                            {
                                if (GetB(colour) >= loBlue && GetB(colour) <= hiBlue)
                                {
                                    float red = (GetR(colour) - loRed) * diffRed;
                                    float green = (GetG(colour) - loGreen) * diffGreen;
                                    float blue = (GetB(colour) - loBlue) * diffBlue;
                                    if (fabs(red - green) + fabs(green - blue) + fabs(red - blue) < variation)
                                    {
                                        rval++;
                                        x += xx;
                                        y += yy;
                                        (*dest) = mid;
                                    }
                                }
                            }
                        }
                        pos++;
                        dest++;
                    }
                    pos += opitch - GetX();
                    dest += pitch - GetX();
                }
            }
            if (rval)
            {
                SetCoded(JSurface::green);
                CPoint point(x / rval, y / rval);
                DrawCrossHair(point);
            }
            Unlock();
        }
        original->Unlock();
    }
    return rval;
}
// JSurfacePingPong.cpp
#include "stdafx.h"
#include "JSurfacePingPong.h"

JSurfacePingPong::JSurfacePingPong(JDevice* setDevice, DWORD setOptions)
    : JSurfaceCentroid(setDevice, setOptions)
{
}

JSurfacePingPong::~JSurfacePingPong()
{
}

void JSurfacePingPong::Parse()
{
    JSurfaceCentroid::Parse();
    int x;
    int y;
    // white ball
    // int z = DetermineCentroid(RGB(190, 178, 160), RGB(211, 205, 195), x, y, 0.8f);
    // orange ball
    //int z = DetermineCentroid(RGB(220, 115, 12), RGB(248, 245, 172), x, y, 0.8f);
    int z = DetermineCentroid(RGB(180, 10, 0), RGB(230, 145, 90), x, y, 1.6f);
}

// JSceneCameraSetup.cpp
#include "stdafx.h"
#include ".\JSceneCameraSetup.h"
#include "./JGridModel.h"
#include "JCameraFOV.h"

JSceneCameraSetup::JSceneCameraSetup(int setMaxPrimitives, DWORD setType)
    : JSceneMobile(FALSE, setMaxPrimitives, setType)
{
    // JGridModel* grid = new JGridModel(0.20f, 0.20f, 50, 50);
    // AddModel(grid);
    CalibrationFile file;
    const Calibration* calibration = file.GetCalibration(1);
    // grid->SetPos(Vector64(0, 0, 0));
    JCameraFOV* camera0 = new JCameraFOV(calibration);
    AddModel(camera0);
    camera0->SetPos(Vector64(-0.3f, 0, 0));
    /* camera0 = new JCameraFOV(file.GetCalibration(1));
    AddModel(camera0);
    camera0->SetPos(Vector64(0.3f, 0, 0)); */
    SetPosition(D3DVECTOR(0, 1, -2));
}

JSceneCameraSetup::~JSceneCameraSetup()
{
}

// JSurfaceVideo.cpp
#include "stdafx.h"
#include "JSurfaceVideo.h"

// #define VIDEO_EDGE_CUT_OFF 24
#ifdef DOT_PRODUCT_CUT_OFF
#define SHORT_CUT_OFF
#endif
#ifdef CROSS_PRODUCT_CUT_OFF
#define SHORT_CUT_OFF
#endif
#ifdef BOTH_VECTOR_CUT_OFFS
#define SHORT_CUT_OFF
#endif

#define MAX_NUMBER_OF_EYES 17
CString eye[MAX_NUMBER_OF_EYES];
int numEyes;

// this class gets camera data
// see CaptureTex9 demo for details
JSurfaceVideo::JSurfaceVideo(JDevice* setDevice, DWORD setOptions)
    : data(NULL), JSurface(setDevice, 0, 0, setOptions | JS_TEXTURE)
{
    pitch = 0;
    Initialise();
#ifdef REGISTER_FILTERGRAPH
    rotRegistration = 0xabcdef32;
#endif
}

JSurfaceVideo::~JSurfaceVideo()
{
#ifdef REGISTER_FILTERGRAPH
    RemoveFromROT(); // Remove graph from Running Object Table
#endif
    // Shut down the graph
    if (mediaControl)
        mediaControl->Stop();
    free(data);
}

HRESULT JSurfaceVideo::Initialise()
{
    HRESULT hr = S_OK;
    CComPtr<IBaseFilter> pRenderer;  // Texture Renderer Filter
    CComPtr<IBaseFilter> pFSrc;      // Source Filter
    CComPtr<IPin> pFSrcPinOut;       // Source Filter Output Pin
    JVideoFilter* pCTR;              // DShow Texture renderer
    // Create the filter graph
    hr = graphBuilder.CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC);
    if (FAILED(hr))
    {
        MessageBox(NULL, "Failed to create filter graph.", "JSurfaceVideo", MB_OK);
        return hr;
    }
    // Get the graph's media control and media event interfaces
    graphBuilder.QueryInterface(&mediaControl);
    graphBuilder.QueryInterface(&mediaEvent);
    // Create the Texture Renderer object
    pCTR = new JVideoFilter(this, hr);
    GetDevice()->Handle(hr);
    pRenderer = pCTR;
    GetDevice()->Handle(graphBuilder->AddFilter(pRenderer, L"Texture Renderer"));
    GetDevice()->Handle(CaptureVideo(pRenderer));
#ifdef REGISTER_FILTERGRAPH
    // Register the graph in the Running Object Table (for debug purposes)
    AddToROT(graphBuilder);
#endif
    // Start the graph running
    GetDevice()->Handle(mediaControl->Run());
    return hr;
}

HRESULT JSurfaceVideo::CaptureVideo(IBaseFilter* pRenderer)
{
    HRESULT hr = S_OK;
    CComPtr<IBaseFilter> pSrcFilter;
    CComPtr<ICaptureGraphBuilder2> capture; // Helps to render capture graphs
    // Create the capture graph builder object to assist in building
    // the video capture filter graph
    GetDevice()->Handle(CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
        CLSCTX_INPROC, IID_ICaptureGraphBuilder2, (void**)&(capture.p)));
    // Attach the existing filter graph to the capture graph
    GetDevice()->Handle(capture->SetFiltergraph(graphBuilder));
    // Use the system device enumerator and class enumerator to find
    // a video capture/preview device, such as a desktop USB video camera.
    GetDevice()->Handle(FindCaptureDevice(&pSrcFilter));
    // Add the returned capture filter to our graph.
    hr = graphBuilder->AddFilter(pSrcFilter, L"Video Capture");
    if (FAILED(hr))
    {
        MessageBox(NULL,
            TEXT("Could not add the capture filter to the graph.\r\n\r\n")
            TEXT("Make sure that a video capture device is connected and functional\n")
            TEXT("and is not being used by another capture application.\0"),
            "JSurfaceVideo", MB_OK);
        return hr;
    }
    // Render the preview pin on the video capture filter.
    // This will create and connect any necessary transform filters.
    // We pass a pointer to the IBaseFilter interface of our CTextureRenderer
    // video renderer, which will draw the incoming video onto a D3D surface.
    hr = capture->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
        pSrcFilter, NULL, pRenderer);
    // I get VFW_S_NOPREVIEWPIN for both streams
    if (FAILED(hr))
    {
        MessageBox(NULL,
            (TEXT("Could not render the capture stream.\r\n\r\n")
            TEXT("Make sure that a video capture device is connected and functional\n")
            TEXT("and is not being used by another capture application.\0")),
            "JSurfaceVideo", MB_OK);
        return hr;
    }
    return S_OK;
}

HRESULT JSurfaceVideo::FindCaptureDevice(IBaseFilter** ppSrcFilter)
{
    HRESULT hr = S_OK;
    CComPtr<IMoniker> pMoniker;
    CComPtr<ICreateDevEnum> pDevEnum;
    CComPtr<IEnumMoniker> pClassEnum;
    IBaseFilter* pSrc = NULL;
    ULONG cFetched;
    if (!ppSrcFilter)
        return E_POINTER;
    if (numEyes >= MAX_NUMBER_OF_EYES)
    {
        MessageBox(NULL, TEXT("Too many capture devices - 17 maximum"), "JSurfaceVideo", MB_OK);
        return E_FAIL;
    }
    // Create the system device enumerator
    GetDevice()->Handle(CoCreateInstance(CLSID_SystemDeviceEnum, NULL,
        CLSCTX_INPROC, IID_ICreateDevEnum, (void**)&pDevEnum));
    // Create an enumerator for the video capture devices
    GetDevice()->Handle(pDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pClassEnum, 0));
    // If there are no enumerators for the requested type, then
    // CreateClassEnumerator will succeed, but pClassEnum will be NULL.
    if (pClassEnum == NULL)
    {
        MessageBox(NULL,
            TEXT("No video capture device was detected.\r\n\r\n")
            TEXT("This sample requires a video capture device, such as a USB WebCam,\r\n")
            TEXT("to be installed and working properly. The sample will now close."),
            TEXT("No Video Capture Hardware"), MB_OK | MB_ICONINFORMATION);
        return E_FAIL;
    }
    // Use the first video capture device on the device list.
    // Note that if the Next() call succeeds but there are no monikers,
    // it will return S_FALSE (which is not a failure). Therefore, we
    // check that the return code is S_OK instead of using SUCCEEDED() macro.
    while (S_OK == (pClassEnum->Next(1, &pMoniker, &cFetched)))
    {
        // Bind Moniker to a filter object
        IPropertyBag* pBag = 0;
        hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void**)&pBag);
        if (SUCCEEDED(hr))
        {
            VARIANT var;
            var.vt = VT_BSTR;
            hr = pBag->Read(L"DevicePath", &var, NULL);
            if (hr == S_OK)
            {
                for (int index = 0; index < numEyes; index++)
                {
                    if (eye[index] == var.bstrVal)
                        goto ALREADY_OBTAINED;
                }
                HRESULT hr = pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)&pSrc);
                GetDevice()->Handle(hr);
                *ppSrcFilter = pSrc;
                name = eye[numEyes++] = var.bstrVal;
                SysFreeString(var.bstrVal);
                // VERIFY(pMoniker->Release() == 1);
                return hr;
            ALREADY_OBTAINED:;
                // SysFreeString(var.bstrVal);
            }
        }
        pMoniker.Release();
        /* else
        {
            MessageBox(NULL, TEXT("Unable to access video capture device."), "JSurfaceVideo", MB_OK);
            return E_FAIL;
        } */
    }
    // Copy the found filter pointer to the output parameter.
    // Do NOT Release() the reference, since it will still be used
    // by the calling function.
    MessageBox(NULL, TEXT("Unable to access video capture device."), "JSurfaceVideo", MB_OK);
    return E_FAIL;
}

void JSurfaceVideo::OnInitSurface()
{
    data = (short*)realloc(data, sizeof(short) * COLOUR_CHANNELS * GetX() * GetY());
}

HRESULT JSurfaceVideo::CopyMediaSample(IMediaSample* pSample)
{
    HRESULT hr = S_OK;
    CAutoLock lock(&criticalSection);
    BYTE* pSampleBuffer = NULL;
    // D3DLOCKED_RECT d3dlr;
    // LONG lTexturePitch; // Pitch of texture
    if (!pSample)
        return E_POINTER;
    // Get the video bitmap buffer
    GetDevice()->Handle(pSample->GetPointer(&pSampleBuffer));
    // Lock the Texture
    // if (Lock())
    if (data)
    {
        // Get the texture buffer & pitch
        // BYTE* pTextureBuffer = GetData();
        LONG lTexturePitch = GetPitch();
        /* if( m_TextureFormat == D3DFMT_A1R5G5B5 )
        for(row = 0; row < m_Height; row++ )
        {
            BYTE *pBmpBufferOld = pTextureBuffer;
            BYTE *pTxtBufferOld = pSampleBuffer;
            for (col = 0; col < m_Width; col++)
*(WORD *)pTextureBuffer = (WORD) (0x8000 + ((pSampleBuffer[2] & 0xF8) << 7) + ((pSampleBuffer[1] & 0xF8) << 2) + (pSampleBuffer[0] >> 3)); pTextureBuffer += 2; pSampleBuffer += 3; pSampleBuffer = pBmpBufferOld + lSamplePitch; pTextureBuffer = pTxtBufferOld + lTexturePitch; if( m_TextureFormat == D3DFMT_A8R8G8B8 ) */ int max = GetY(); int width = GetX(); short* red = Offset(0, 0, 0); uint offset = Offset(1, 0, 0) - data; // y-invert the image: pSampleBuffer += (max - 1) * pitch; for (int row = 0; row < max; row++ ) BYTE* pBmpBufferOld = pSampleBuffer; //BYTE *pTxtBufferOld = pTextureBuffer; short* redOld = red; // mirror red += width - 1; for (int col = 0; col < width; col++) *(red) = pSampleBuffer[0]; *(red+offset) = pSampleBuffer[1]; *(red+offset*2) = pSampleBuffer[2]; pSampleBuffer += 3; // mirror red--; red++; pSampleBuffer = pBmpBufferOld - pitch; red = redOld + width; //pTextureBuffer = pTxtBufferOld + lTexturePitch; // Unlock(); differentiations = 0; Parse(); OnUpdate(); return hr; //----------------------------------------------------------------------------- // Name: CheckMovieStatus // Desc: Wait for capture events like device removal //----------------------------------------------------------------------------- void JSurfaceVideo::CheckMovieStatus() long lEventCode; long lParam1;
long lParam2; HRESULT hr = S_OK; if (!mediaEvent) return; // Check for completion events hr = mediaEvent->GetEvent(&lEventCode, (LONG_PTR *) &lParam1, (LONG_PTR *) &lParam2, 0); if (SUCCEEDED(hr)) // Free any memory associated with this event hr = mediaEvent->FreeEventParams(lEventCode, lParam1, lParam2); void JSurfaceVideo::CreateSurface(int width, int height, int setPitch) pitch = setPitch; LPDIRECTDRAWSURFACE7 surf; GetDevice()->CreateSurface(surf, width, height); SetSurface(surf); //----------------------------------------------------------------------------- // Running Object Table functions: Used to debug. By registering the graph // in the running object table, GraphEdit is able to connect to the running // graph. This code should be removed before the application is shipped in // order to avoid third parties from spying on your graph. //----------------------------------------------------------------------------- #ifdef REGISTER_FILTERGRAPH HRESULT JSurfaceVideo::AddToROT(IUnknown *pUnkGraph) HRESULT hr = S_OK; IMoniker * pmk = NULL; IRunningObjectTable *pROT = NULL; WCHAR wsz[256]; if (FAILED(GetRunningObjectTable(0, &pROT))) return E_FAIL; wsprintfW(wsz, L"FilterGraph %08x pid %08x\0", (DWORD_PTR) 0, GetCurrentProcessId()); hr = CreateItemMoniker(L"!", wsz, &pmk); if (SUCCEEDED(hr)) // Use the ROTFLAGS_REGISTRATIONKEEPSALIVE to ensure a strong reference // to the object. Using this flag will cause the object to remain // registered until it is explicitly revoked with the Revoke() method. // // Not using this flag means that if GraphEdit remotely connects // to this graph and then GraphEdit exits, this object registration // will be deleted, causing future attempts by GraphEdit to fail until // this application is restarted or until the graph is registered again. hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph, pmk, &rotRegistration); pmk->Release(); pROT->Release(); return hr;
void JSurfaceVideo::RemoveFromROT(void) IRunningObjectTable *pirot=0; if (SUCCEEDED(GetRunningObjectTable(0, &pirot))) pirot->Revoke(rotRegistration); pirot->Release(); #endif //#define HL_FILTER /* #define EDGE_THRESHHOLD 15 #define NUM_ITERATIONS 1 #define NUM_ITERATIONS_DIFF 2 #define NUM_ITERATIONS_2 0 #define MAX_VIDEO_OBJECTS (X_RES * Y_RES) #define C_DIM 3 #define Q_DIM C_DIM * 2 // x-diff and y-diff int stencil[9] = 0, 1, 0, 1, 16, 1, 0, 1, 0 ; int stencilWeight = 1; UINT colourMap[MAX_VIDEO_OBJECTS]; /* struct VideoObject int id; int id2; int x, y; // divide by size to get actual x and y int size; COLOUR colour; video[MAX_VIDEO_OBJECTS]; */ /* int videoMap[Y_RES * X_RES]; int queue[MAX_VIDEO_OBJECTS]; int red, green, blue, size; */ void JSurfaceVideo::Parse()
// parse the image for objects // default implementation simply moves data to the surface Blur(3); DeContrast(128); /* CreateXY(); Differentiate(1); SumAbsolute(); Blur(1); */ // DifferentiateRelative(1, 16); // Blur(2, TRUE); // SumSquares(); // DotProductCutOff(); // CrossProductCutOff(); // Blur(2); Scale(256); // Scale(0.5f, TRUE); if (Lock()) BYTE* dest = GetData(); // uint offset = COLOURS * GetX() * GetY(); // use the hi byte BYTE* red = ((BYTE*)Offset(0, 0, 0)) + 1; BYTE* green = (BYTE*)Offset(1, 0, 0) + 1; BYTE* blue = (BYTE*)Offset(2, 0, 0) + 1; int maxX = MaxX(); int maxY = MaxY(); for (int yy = maxY; yy; yy--) for (int xx = 0; xx < maxX; xx++) #ifdef VIDEO_EDGE_CUT_OFF if (red[0] + green[0] + blue[0] > VIDEO_EDGE_CUT_OFF) dest[1] = 255; else dest[1] = 0; dest[0] = 0; dest[2] = 0; #else dest[0] = red[0]; dest[1] = green[0]; dest[2] = blue[0]; #endif dest[3] = 0xFF; dest += 4; red += 2; green += 2; blue += 2; Unlock();
/* DeContrast removes lighting-bias from an image */
void JSurfaceVideo::DeContrast(int deviation)
{
    ASSERT(deviation);
    short* pos = data;
    short* end = &data[GetX() * GetY()];
    uint offset = GetX() * GetY();
    while (pos < end)
    {
        WORD max = pos[offset * 2];
        if (max < pos[offset])
            max = pos[offset];
        if (max < pos[0])
            max = pos[0];
        uint bias = ((255 + deviation) << 8) / (max + deviation);
        *pos = (*pos * bias) >> 8;
        pos[offset] = (pos[offset] * bias) >> 8;
        pos[offset * 2] = (pos[offset * 2] * bias) >> 8;
        pos++;
    }
}

void JSurfaceVideo::CrossProductCutOff()
{
    short* red = Offset(0, 0, 0);
    short* green = Offset(1, 0, 0);
    short* blue = Offset(2, 0, 0);
    int maxX = MaxX();
    int maxY = MaxY();
    uint offset = COLOURS * GetX() * GetY();
    for (int yy = maxY; yy; yy--)
    {
        for (int xx = maxX; xx; xx--)
        {
            green[0] = (short)(abs((int)(red[0] * green[offset] - red[offset] * green[0]))
                     + abs((int)(green[0] * blue[offset] - green[offset] * blue[0]))
                     + abs((int)(blue[0] * red[offset] - blue[offset] * red[0])));
            red[0] = 0;
            blue[0] = 0;
            red++; green++; blue++;
        }
    }
}

void JSurfaceVideo::DotProductCutOff()
{
    short* red = Offset(0, 0, 0);
    short* green = Offset(1, 0, 0);
    short* blue = Offset(2, 0, 0);
    int maxX = MaxX();
    int maxY = MaxY();
    uint offset = COLOURS * GetX() * GetY();
for (int yy = maxY; yy; yy--) for (int xx = maxX; xx; xx--) /* int rmag = red[0] * red[0] + red[offset] * red[offset]; int gmag = green[0] * green[0] + green[offset] * green[offset]; int bmag = blue[0] * blue[0] + blue[offset] * blue[offset]; int max = rmag; if (gmag > max) max = gmag; if (bmag > max) max = bmag; int min = rmag; if (gmag < min) min = gmag; if (bmag < min) min = bmag; */ green[0] = (BYTE)(abs((int)(red[0] * green[0])) + abs((int)(green[0] * blue[0])) + abs((int)(blue[0] * red[0])) + abs((int)(red[offset] * green[offset])) + abs((int)(green[offset] * blue[offset])) + abs((int)(blue[offset] * red[offset]))); red[0] = 0; blue[0] = 0; red++; green++; blue++; void JSurfaceVideo::BothProductsCutOff() short* red = Offset(0, 0, 0); short* green = Offset(1, 0, 0); short* blue = Offset(2, 0, 0); int maxX = MaxX(); int maxY = MaxY(); uint offset = COLOURS * GetX() * GetY(); for (int yy = maxY; yy; yy--) for (int xx = maxX; xx; xx--) green[0] = (BYTE)(abs((int)(red[0] * green[offset] - red[offset] * green[0])) + abs((int)(green[0] * blue[offset] - green[offset] * blue[0])) + abs((int)(blue[0] * red[offset] - blue[offset] * red[0]))) + (BYTE)(abs((int)(red[0] * red[offset])) + abs((int)(green[0] * green[offset])) + abs((int)(blue[0] * blue[offset]))); red[0] = 0; blue[0] = 0; red++; green++; blue++; void JSurfaceVideo::BlurMedian(int iterations, BOOL xy)
int maxX = GetX() - 1; int maxY = GetY() - 1; int offset = GetX(); int colours = COLOURS; if (xy) colours *= 2; for (int index = 0; index < iterations; index++) short* pos = (short*)(data); for (int colour = 0; colour < colours; colour++) for (int xx = offset; xx; xx--) *(pos) = *(pos + offset); pos++; for (int yy = 1; yy < maxY; yy++) *(pos) = *(pos+1); pos++; for (int xx = 1; xx < maxX; xx++) // find median of 4 surrounding points: short list[4]; list[0] = pos[-1]; list[1] = pos[1]; list[2] = pos[-offset]; list[3] = pos[offset]; for (int item = 1; item < 4; item++) short swap = list[item]; for (int insertPos = 0; insertPos < item; insertPos++) if (list[insertPos] > list[item]) for (int mover = item; mover > insertPos; mover--) list[mover] = list[mover - 1]; list[insertPos] = swap; pos[0] = (list[1] + list[2]) >> 1; //pos[0] = (pos[-1] + pos[1] + pos[-offset] + pos[offset] /*+ (pos[0])*/) >> 2; pos++; *(pos) = *(pos-1); pos++; // pos += 2; // skip end of row, start of next row //pos += 2 * offset; // skip end row, start row
for (int xx = offset; xx; xx--) *(pos) = *(pos - offset); pos++; void JSurfaceVideo::Blur(int iterations, BOOL xy) int maxX = GetX() - 1; int maxY = GetY() - 1; int offset = GetX(); int colours = COLOURS; if (xy) colours *= 2; short* row = (short*)malloc(sizeof(short) * offset); // memset(row, 0, sizeof(short) * offset); for (int index = 0; index < iterations; index++) short* pos = (short*)(data); for (int colour = 0; colour < colours; colour++) short* rowPos = row; for (int xx = offset; xx; xx--) rowPos[0] = pos[0]; pos[0] = pos[offset]; pos++; rowPos++; for (int yy = 1; yy < maxY; yy++) rowPos = row; short prev = rowPos[0]; rowPos[0] = pos[0]; pos[0] = pos[1]; pos++; rowPos++; for (int xx = 1; xx < maxX; xx++) short above = rowPos[0]; rowPos[0] = pos[0]; pos[0] = (rowPos[1] + prev + rowPos[-1] + pos[1] + above + pos[offset] + pos[offset - 1] + pos[offset + 1] /*+ (pos[0])*/) >> 3;//>> 2; pos++; rowPos++; prev = above; *(pos) = *(pos-1); pos++; // pos += 2; // skip end of row, start of next row //pos += 2 * offset; // skip end row, start row for (int xx = offset; xx; xx--) *(pos) = *(pos - offset); pos++;
    free(row);
}

void JSurfaceVideo::CreateXY()
{
    short* pos = data;
    short* end = Offset(COLOURS, 0, 0);
    uint offset = end - pos;
    while (pos < end)
    {
        pos[offset] = pos[0];
        pos++;
    }
}

void JSurfaceVideo::Differentiate(int iterations)
{
    // process the image
    int maxX = MaxX();
    int maxY = MaxY();
    differentiations += iterations; // this affects the used surface size
    for (int index = 0; index < iterations; index++)
    {
        short* pos = data;
        short* end = Offset(COLOURS, 0, 0);
        while (pos < end)
        {
            *pos -= *(pos + 1);
            pos++;
        }
        pos = Offset(COLOURS, 0, 0);
        end = Offset(COLOURS * 2, 0, 0);
        uint offset = GetX();
        while (pos < end)
        {
            *pos -= *(pos + 1);
            pos++;
        }
    }
}

void JSurfaceVideo::DifferentiateRelative(int iterations, int deviation)
{
    // process the image
    int maxX = MaxX();
    int maxY = MaxY();
    differentiations += iterations; // this affects the used surface size
    for (int index = 0; index < iterations; index++)
    {
        short* pos = data;
        short* end = Offset(COLOURS, 0, 0);
        while (pos < end)
        {
            *pos = (short)((((int)(*(pos + 1) - (*pos))) * (256 + deviation))
                           / (abs(*pos) + deviation));
            pos++;
        }
        pos = Offset(COLOURS, 0, 0);
        end = Offset(COLOURS * 2, 0, 0);
        uint offset = GetX();
        while (pos < end)
        {
            *pos = (short)((((int)(*(pos + 1) - (*pos))) * (256 + deviation))
                           / (abs(*pos) + deviation));
            pos++;
        }
    }
}

void JSurfaceVideo::Scale(float factor, BOOL xy)
{
    short* pos = data;
    int colours = COLOURS;
    if (xy)
        colours *= 2;
    short* end = Offset(colours, 0, 0);
    while (pos < end)
    {
        *pos = (int)(factor * (float)*pos);
        pos++;
    }
}

void JSurfaceVideo::SumAbsolute()
{
    short* pos = data;
    short* end = Offset(COLOURS, 0, 0);
    uint offset = end - pos;
    while (pos < end)
    {
        short word = *pos;
        if (word < 0)
            *pos = -word;
        word = *(pos + offset);
        if (word < 0)
            *pos -= word;
        else
            *pos += word;
        pos++;
    }
}

void JSurfaceVideo::SumSquares()
{
    short* pos = data;
    short* end = Offset(COLOURS, 0, 0);
    uint offset = end - pos;
    while (pos < end)
    {
        *pos = SQUARE(*pos) + SQUARE(*(pos + offset));
        pos++;
    }
}

/*
void JSurfaceVideo::EnhanceEdges()
{
    // enhance edges by clamping a pixel to the third lowest to third highest neighbour
    // consider zooming in with blur and then enhancing edges
    ASSERT(FALSE); // not written yet
}
*/
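The per-pixel arithmetic of DeContrast above can be checked in isolation. The sketch below is a hypothetical standalone helper (not part of the project code) that mirrors the same 8.8 fixed-point normalisation for a single RGB triple: the largest channel is pushed towards 255, and the deviation term softens the gain on dark pixels while preventing division by zero.

```c
#include <stdint.h>

/* Lighting normalisation for one pixel, mirroring JSurfaceVideo::DeContrast:
   bias = ((255 + deviation) << 8) / (max + deviation), then each channel is
   scaled by bias and shifted back down by 8 bits (8.8 fixed point). */
void decontrast_pixel(uint16_t *r, uint16_t *g, uint16_t *b, int deviation)
{
    uint16_t max = *r;
    unsigned bias;
    if (*g > max) max = *g;
    if (*b > max) max = *b;
    bias = ((255u + deviation) << 8) / (max + deviation);
    *r = (uint16_t)((*r * bias) >> 8);
    *g = (uint16_t)((*g * bias) >> 8);
    *b = (uint16_t)((*b * bias) >> 8);
}
```

With the deviation of 128 used in Parse(), a half-bright pixel (128, 64, 32) is lifted to roughly (191, 95, 47): the channel ratios are preserved while the overall level rises.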
// JCameraFOV.cpp #include "stdafx.h" #include "JCameraFOV.h" JCameraFOV::JCameraFOV(const Calibration* setCalibration) calibration = setCalibration->Clone(); JCameraFOV::~JCameraFOV() delete calibration; void JCameraFOV::OnDraw(JScene* scene, int maxPrimitives) int repeatX = calibration->GetX(); int repeatY = calibration->GetY(); JSurface3D* surface = scene->GetSurface(); LPDIRECT3DDEVICE7 device3d = surface->Get3DDevice(); D3DVECTOR n = D3DVECTOR(0, 1, 0); // normal // WE ALSO WANT TO DRAW CORNER LINES (AND THE CENTRE LINE) THROUGH THE CAMERA // vertical D3DVERTEX* vertex = (D3DVERTEX*)malloc(sizeof(D3DVERTEX) * (2 * ((repeatX - 1) * (repeatY) + (repeatY - 1) * (repeatX)) + 1)); D3DVERTEX* item = vertex; int FOV_STEP = 1; for (int i = 0; i < repeatX; i += FOV_STEP) D3DVECTOR v; ThreeD threeD; calibration->GetData(i, 0, threeD); threeD.GetD3DVECTOR(v); *(item++) = D3DVERTEX(v, n, 0, 0); for (int j = 1; j < repeatY; j += FOV_STEP) calibration->GetData(i, j, threeD); threeD.GetD3DVECTOR(v); *(item++) = D3DVERTEX(v, n, 0, 0); *(item++) = D3DVERTEX(v, n, 0, 0); item--; for (int j = 0; j < repeatY; j += FOV_STEP) D3DVECTOR v; ThreeD threeD; calibration->GetData(0, j, threeD); threeD.GetD3DVECTOR(v); *(item++) = D3DVERTEX(v, n, 0, 0); for (int i = 1; i < repeatX; i += FOV_STEP)
calibration->GetData(i, j, threeD); threeD.GetD3DVECTOR(v); *(item++) = D3DVERTEX(v, n, 0, 0); *(item++) = D3DVERTEX(v, n, 0, 0); item--; int next = item - vertex; //(2 * ((repeatX - 1) * (repeatY) + (repeatY - 1) * (repeatX))); int skip = (D3DMAXNUMVERTICES / 2) * 2; int iterations = 0; D3DMATERIAL7 mtrl; D3DUtil_InitMaterial(mtrl, 0.0f, 0.0f, 0.0f, 0.0f); mtrl.emissive.g = 0.4f; mtrl.emissive.b = 0.4f; device3d->SetMaterial(&mtrl); while (iterations + skip < next) while (device3d->DrawPrimitive(D3DPT_LINELIST, D3DFVF_VERTEX, vertex + iterations, skip, NULL ) == DDERR_WASSTILLDRAWING) DO_STUFF; iterations += skip; if (next > iterations) ASSERT(((next - iterations) % 2) == 0); while (device3d->DrawPrimitive(D3DPT_LINELIST, D3DFVF_VERTEX, vertex + iterations, next - iterations, NULL ) == DDERR_WASSTILLDRAWING) DO_STUFF; // now draw the 4 corners and the centreline: vertex = (D3DVERTEX*)realloc(vertex, sizeof(D3DVERTEX) * 10); D3DVECTOR corner[6]; // corner 0 is the origin, 5 is the centre. ThreeD threeD; ThreeD temp; int margin = calibration->GetMargin(); threeD.GetD3DVECTOR(corner[0]); calibration->GetData(margin, margin, threeD); threeD.GetD3DVECTOR(corner[1]); calibration->GetData(calibration->GetX() - margin - 1, margin, threeD); threeD.GetD3DVECTOR(corner[2]); calibration->GetData(margin, calibration->GetY() - 1 - margin, threeD); threeD.GetD3DVECTOR(corner[3]); calibration->GetData(calibration->GetX() - margin - 1, calibration->GetY() - 1 - margin, threeD); threeD.GetD3DVECTOR(corner[4]); threeD.clear(); // this line was missing - causing the middle to be OFF! //* for (int i = 0; i < 2; i++) for (int j = 0; j < 2; j++)
calibration->GetData(i + calibration->GetX() / 2, j + calibration->GetY() / 2, temp); threeD.add(&temp); // calibration->GetData(240, 180, threeD); threeD.multiply((THREED)0.25); // 0.25 would give 100%, 0.3 gives 120% length! threeD.GetD3DVECTOR(corner[5]); for (int i = 0; i < 5; i++) vertex[i * 2] = D3DVERTEX(corner[0], n, 0, 0); vertex[i * 2 + 1] = D3DVERTEX(corner[i + 1], n, 0, 0); //* mtrl.emissive.b = 0.0f; device3d->SetMaterial(&mtrl); while (device3d->DrawPrimitive(D3DPT_LINELIST, D3DFVF_VERTEX, vertex, 8, NULL ) == DDERR_WASSTILLDRAWING) DO_STUFF; mtrl.emissive.g = 0.0f; mtrl.emissive.r = 0.7f; device3d->SetMaterial(&mtrl); while (device3d->DrawPrimitive(D3DPT_LINELIST, D3DFVF_VERTEX, vertex + 8, 2, NULL ) == DDERR_WASSTILLDRAWING) DO_STUFF; //*/ free(vertex);
Appendix F – Microcontroller source code (excerpt)
Main.c

#include <mc9s12dp256.h> /* derivative information */
#include <stdio.h>       /* sscanf */
#include "sci1.h"        /* support for SCI1 */
#include "lcd.h"         /* LCD_init(), writeLine() */
#include "pll.h"         /* defines _BUSCLOCK, sets bus frequency to _BUSCLOCK MHz */
#include "timer.h"       /* TOC7_Init, TOC7_ISR */
#include "porth.h"       /* Encoder interrupt on port h */
#include "dac5311.h"     /* DAC */
#include "charConv.h"    /* Integer/float to character conversion stuff */
#include "comm.h"        /* Communications interpretation */
#include "settings.h"    /* Definitions file */
#include "pwm.h"         /* PWM handler */
#include "display.h"     /* Display handler */

/* Port H
   Bit 0 - Channel A
   Bit 1 - Channel B */

//static char velocityStr[8]=" . \0";
//static char positionStr[7]=" . \0";

void main(void)
{
  /* set system clock frequency to _BUSCLOCK MHz (24 or 4) */
  PLL_Init();

  /* Initialise LCD display */
  LCD_init();

  ///* Pointless introduction */
  //writeLine("Serial Interface",0);
  //writeLine("--Initialising--",1);

  /* initialise serial communication interface SCI1 */
  //SCI1_Init(BAUD_115200);  // capped at 9600, if PLL inactive (4 MHz bus)
  SCI1_Init(BAUD_9600);

  /* Initialise the encoder interrupt on port H */
  encoder_Init();

  /* Initialise communications */
  comm_Init();

  /* Initialise the PWM */
  PWM_Init();

  /* Initialise the display */
  display_Init();

  /* Initialise the motor output */
  motor_Init();

  /* set port B as output (LEDs) */
  DDRB = 0xff; // Port B0,1 is output

  /* activate timer TOC7 */
  TOC7_Init();

  /* allow all interrupts to happen */
  asm cli

#if TERMINAL
  SCI1_OutString("\n\rStarting Program\n\r");
#endif

  //PURGE BUFFER
  while(SCI1_InStatus())
    SCI1_InChar();

  /* forever */
  for(;;)
  {
    if(interruptCount>=183)
    {
      interruptCount=0;
      HBLED ^= 1; // Toggle output bit (PB0)
    }
    display_Update();
  }
}
comm.h

#ifndef _COMM_H_
#define _COMM_H_

extern void comm_Evaluate(void);
extern void comm_Init(void);
extern void comm_TimerUpdate(void);

#endif
comm.c

#include <mc9s12dp256.h>
#include "sci1.h"
#include "encoder.h"
#include "settings.h"
#include "motor.h"

#if TERMINAL
//Request is "RQ"
#define REQUEST_FIRSTHALF 0x52
#define REQUEST_SECONDHALF 0x51
#else
//Request is 0b0111111111111111
//#define REQUEST_FIRSTHALF 127
//#define REQUEST_SECONDHALF 127
#define STARTONE 0x7f
#define STARTTWO 0x7e
#define ENDONE 0x7d
#define ENDTWO 0x7c
#endif

unsigned char storedChar[STORED_SIZE];
short storedValue;
int currentPosition;
unsigned short timerCounter;
byte commTimerEnable;
byte retrievedByte;
byte commState;
/* 0 - Awaiting start command 1
   1 - Awaiting start command 2
   2 - Processing torques
   3 - Awaiting end command 1
   4 - Awaiting end command 2
   5 - Ready to transmit */
//byte commReplied; /* 0 - Have not replied yet. 1 - Have replied */
//byte youHaveAnError;

/* Resets and activates the timer */
void comm_TimerReset(void)
{
  timerCounter=0;
  commTimerEnable=1;
}

/* Activated at timeout, resets the buffer and kills the timer */
void comm_Timeout(void)
{
  COMM_RECEIVING_LED = 0;
  timerCounter=0;
  commTimerEnable=0;
  /* Back to the start */
  currentPosition=0;
  commState=0;
}

/* Activated by timer, used to activate timeout if necessary */
void comm_TimerUpdate(void)
{
  if(commTimerEnable)
  {
    timerCounter++;
    if(timerCounter>=TIMEOUT)
      comm_Timeout();
  }
}

/* Transmits the required data after a request has been made */
void comm_Transmit(void)
{
#if TERMINAL
  COMM_TRANSMIT_LED ^= 1;
  COMM_RECEIVING_LED = 0;
  SCI1_OutString("Request received, transmitting encoder values.\n\r");
  SCI1_OutString("Encoder 1 value: ");
  SCI1_OutUDec((unsigned short)positionDeg[0]);
  SCI1_OutString(" degrees\n\r");
  SCI1_OutString("Encoder 2 value: ");
  SCI1_OutUDec((unsigned short)positionDeg[1]);
  SCI1_OutString(" degrees\n\r");
  SCI1_OutString("Encoder 3 value: ");
  SCI1_OutUDec((unsigned short)positionDeg[2]);
  SCI1_OutString(" degrees\n\r");
  SCI1_OutString("Encoder 4 value: ");
  SCI1_OutUDec((unsigned short)positionDeg[3]);
  SCI1_OutString(" degrees\n\r");
  SCI1_OutString("Encoder 5 value: ");
  SCI1_OutUDec((unsigned short)positionDeg[4]);
  SCI1_OutString(" degrees\n\r");
  SCI1_OutString("Encoder 6 value: ");
  SCI1_OutUDec((unsigned short)positionDeg[5]);
  SCI1_OutString(" degrees\n\r");
#else
  //Sending encoder values down the line
  int i=0;
  COMM_TRANSMIT_LED ^= 1;
  COMM_RECEIVING_LED = 0;
  SCI1_OutChar(STARTONE);
  SCI1_OutChar(STARTTWO);
  for(i;i<NUM_TOTAL_ENCODERS;i++)
    SCI1_Out16(encoderChange[i]);
  SCI1_OutChar(ENDONE);
  SCI1_OutChar(ENDTWO);
#endif
}

/* Evaluates the data received so far */
void comm_Evaluate(void)
{
  int i=0;
  COMM_RECEIVING_LED = 1;

  /* while there is remaining space in the storage and the buffer contains
     characters, the character in the buffer is placed in storage, and we
     move to the next slot in the storage space */
  /*
  while(SCI1_InStatus()&&currentPosition!=STORED_SIZE)
  {
    storedChar[currentPosition]=SCI1_InChar();
    currentPosition++;
  }
  */

  //Retrieving Characters
  while(SCI1_InStatus())
  {
    switch(commState)
    {
      case 0:
        retrievedByte=SCI1_InChar();
        if(retrievedByte==STARTONE)
        {
          /* commReplied=0; */
          commState=1;
          /* Reset timeout timer */
          comm_TimerReset();
        }
        else
        {
          /*
          //ERROR
          if(!commReplied)
          {
            commState=5;
            youHaveAnError=1;
          }
          */
        }
        break;
      case 1:
        retrievedByte=SCI1_InChar();
        if(retrievedByte==STARTTWO)
        {
          commState=2;
          /* Reset timeout timer */
          comm_TimerReset();
        }
        else
        {
          /*
          //ERROR
          if(!commReplied)
          {
            commState=5;
            youHaveAnError=1;
          }
          */
        }
        break;
      case 2:
        storedChar[currentPosition]=SCI1_InChar();
        currentPosition++;
        /* Reset timeout timer */
        comm_TimerReset();
        if(currentPosition==STORED_SIZE)
          commState=3;
        break;
      case 3:
        retrievedByte=SCI1_InChar();
        if(retrievedByte==ENDONE)
        {
          commState=4;
          /* Reset timeout timer */
          comm_TimerReset();
        }
        else
        {
          /*
          //ERROR
          if(!commReplied)
          {
            commState=5;
            youHaveAnError=1;
          }
          */
        }
        break;
      case 4:
        retrievedByte=SCI1_InChar();
        if(retrievedByte==ENDTWO)
        {
          commState=5;
          /* Reset timeout timer */
          comm_TimerReset();
        }
        else
        {
          /*
          //ERROR
          if(!commReplied)
          {
            commState=5;
            youHaveAnError=1;
          }
          */
        }
        break;
    }
  }

  /* Stored value is full */
  if(currentPosition==STORED_SIZE)
  {
    COMM_UPDATING_LED ^= 1;
    COMM_RECEIVING_LED = 0;
#if TERMINAL
    //Spamming serial connection
    SCI1_OutString("Received ");
    SCI1_OutUDec(STORED_SIZE);
    SCI1_OutString(" bytes, now processing.\n\r");
#else
    //Sending torques, positions to the motors
    for(i=0;i<STORED_SIZE;i+=2)
    {
      storedValue =0;
      storedValue|=(storedChar[i]<<8);
      storedValue|=storedChar[i+1];
      motor_SetOutput(i/2,storedValue);
    }
#endif
    timerCounter=0;
    commTimerEnable=0;
    currentPosition=0;
  }

  if(commState==5)
  {
    //THIS IS A BAD IDEA, GET RID OF IT
    comm_Transmit();
    // Reset the changes in the encoder values
    porth_ResetChange();
    // Reset the storage index
    currentPosition=0;
    commTimerEnable=0;
    timerCounter=0;
    commState=0;
    //commReplied=1;
  }

  /*//Test for request
  if(currentPosition==2)
  {
    if((storedChar[0]==REQUEST_FIRSTHALF)&&(storedChar[1]==REQUEST_SECONDHALF))
    {
      //Received request
      // Begin Transmission
      comm_Transmit();
      // Reset the changes in the encoder values
      porth_ResetChange();
      // Reset the storage index
      currentPosition=0;
      commTimerEnable=0;
      timerCounter=0;
    }
  }
  */
}

void comm_Init(void)
{
  timerCounter=0;
  commTimerEnable=0;
  currentPosition=0;
  storedValue=0;
  commState=0;
  //commReplied=0;
  //youHaveAnError=0;

  /*
  //GET RID OF THIS
  comm_Transmit();
  // Reset the changes in the encoder values
  porth_ResetChange();
  // Reset the storage index
  currentPosition=0;
  commTimerEnable=0;
  timerCounter=0;
  */
}
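The framing that comm_Evaluate expects on the wire is simple to mirror on the PC side. The sketch below is a hypothetical host-side helper, not part of the project code: it packs six signed 16-bit torque values between the STARTONE/STARTTWO and ENDONE/ENDTWO markers, high byte first, matching the `(storedChar[i] << 8) | storedChar[i+1]` unpacking on the microcontroller (assuming NUM_TOTAL_ENCODERS is six and STORED_SIZE is therefore twelve bytes, as the six-encoder terminal output suggests).

```c
#include <stddef.h>
#include <stdint.h>

#define STARTONE 0x7f
#define STARTTWO 0x7e
#define ENDONE   0x7d
#define ENDTWO   0x7c
#define NUM_VALUES 6  /* assumed channel count */

/* Frame NUM_VALUES signed 16-bit words into 'out' (size >= 2*NUM_VALUES + 4).
   Returns the number of bytes written. Values go out high byte first. */
size_t comm_FramePacket(const int16_t *values, uint8_t *out)
{
    size_t n = 0;
    int i;
    out[n++] = STARTONE;
    out[n++] = STARTTWO;
    for (i = 0; i < NUM_VALUES; i++) {
        out[n++] = (uint8_t)((values[i] >> 8) & 0xff); /* high byte */
        out[n++] = (uint8_t)(values[i] & 0xff);        /* low byte  */
    }
    out[n++] = ENDONE;
    out[n++] = ENDTWO;
    return n;
}
```

Note that the start and end markers (0x7c to 0x7f) can also occur inside a payload byte; the firmware tolerates this because the state machine only looks for markers at fixed positions and falls back to the timeout in comm_Timeout() on a framing slip.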
encoder.h

#ifndef _ENCODER_H_
#define _ENCODER_H_

extern short encoderChange[];
extern void porth_ResetChange(void);

#endif /* _ENCODER_H_ */
encoder.c

#include "portj.h"
#include "porth.h"
#include "settings.h"

short encoderChange[NUM_TOTAL_ENCODERS];

void porth_ResetChange(void)
{
  int i=0;
  //display_setValueInt(0,encoderChange[0]);
  //display_setValueInt(1,encoderChange[1]);
  //UGLY FIX ====>
  for(i;i<NUM_TOTAL_ENCODERS;i++)
    encoderChange[i]=0;
}

void encoder_Init(void)
{
  int i=0;
  porth_Init();
  portj_Init();
  //UGLY FIX ====>
  for(i;i<NUM_TOTAL_ENCODERS;i++)
    encoderChange[i]=0;
}
isr_vectors.h

#include "timer.h" /* TOC7_ISR */
#include "porth.h"
#include "portj.h"

extern void near _Startup(void); /* Startup routine */

/* declarations of interrupt service routines */
extern __interrupt void SCI1_isr(void);

#pragma CODE_SEG __NEAR_SEG NON_BANKED
/* Interrupt section for this module. Placement will be in NON_BANKED area. */

__interrupt void UnimplementedISR(void)
{
  /* Unimplemented ISRs trap. */
  asm BGND;
}

typedef void (*near tIsrFunc)(void);

const tIsrFunc _vect[] @0xFF80 = { /* Interrupt table */
  UnimplementedISR, /* vector 0x40 */
  UnimplementedISR, /* vector 0x3F */
  UnimplementedISR, /* vector 0x3E */
  UnimplementedISR, /* vector 0x3D */
  UnimplementedISR, /* vector 0x3C */
  UnimplementedISR, /* vector 0x3B */
  UnimplementedISR, /* vector 0x3A */
  UnimplementedISR, /* vector 0x39 */
  UnimplementedISR, /* vector 0x38 */
  UnimplementedISR, /* vector 0x37 */
  UnimplementedISR, /* vector 0x36 */
  UnimplementedISR, /* vector 0x35 */
  UnimplementedISR, /* vector 0x34 */
  UnimplementedISR, /* vector 0x33 */
  UnimplementedISR, /* vector 0x32 */
  UnimplementedISR, /* vector 0x31 */
  UnimplementedISR, /* vector 0x30 */
  UnimplementedISR, /* vector 0x2F */
  UnimplementedISR, /* vector 0x2E */
  UnimplementedISR, /* vector 0x2D */
  UnimplementedISR, /* vector 0x2C */
  UnimplementedISR, /* vector 0x2B */
  UnimplementedISR, /* vector 0x2A */
  UnimplementedISR, /* vector 0x29 */
  UnimplementedISR, /* vector 0x28 */
  UnimplementedISR, /* vector 0x27 */
  UnimplementedISR, /* vector 0x26 */
  UnimplementedISR, /* vector 0x25 */
  UnimplementedISR, /* vector 0x24 */
  UnimplementedISR, /* vector 0x23 */
  UnimplementedISR, /* vector 0x22 */
  UnimplementedISR, /* vector 0x21 */
  UnimplementedISR, /* vector 0x20 */
  UnimplementedISR, /* vector 0x1F */
  UnimplementedISR, /* vector 0x1D */
  UnimplementedISR, /* vector 0x1C */
  UnimplementedISR, /* vector 0x1B */
  UnimplementedISR, /* vector 0x1A */
  porth_ISR,        /* vector 0x19 (PORT H) */
  portj_ISR,        /* vector 0x18 (PORT J) */
  UnimplementedISR, /* vector 0x17 (ATD1) */
  UnimplementedISR, /* vector 0x16 (ATD0) */
  SCI1_isr,         /* vector 0x15 (SCI1) */
  UnimplementedISR, /* vector 0x14 (SCI0) */
  UnimplementedISR, /* vector 0x13 */
  UnimplementedISR, /* vector 0x12 */
  UnimplementedISR, /* vector 0x11 */
  UnimplementedISR, /* vector 0x10 (TOF, timer overflow interrupt) */
  TOC7_ISR,         /* vector 0x0F (C7I, timer interrupt channel 7) */
  UnimplementedISR, /* vector 0x0E (C6I, timer interrupt channel 6) */
  UnimplementedISR, /* vector 0x0D (C5I, timer interrupt channel 5) */
  UnimplementedISR, /* vector 0x0C (C4I, timer interrupt channel 4) */
  UnimplementedISR, /* vector 0x0B (C3I, timer interrupt channel 3) */
  UnimplementedISR, /* vector 0x0A (C2I, timer interrupt channel 2) */
  UnimplementedISR, /* vector 0x09 (C1I, timer interrupt channel 1) */
  UnimplementedISR, /* vector 0x08 (C0I, timer interrupt channel 0) */
  UnimplementedISR, /* vector 0x07 (RTIE) */
  UnimplementedISR, /* vector 0x06 */
  UnimplementedISR, /* vector 0x05 */
  UnimplementedISR, /* vector 0x04 */
  UnimplementedISR, /* vector 0x03 */
  UnimplementedISR, /* vector 0x02 */
  UnimplementedISR, /* vector 0x01 */
  _Startup          /* vector 0x00 (RESET) */
};
motor.h

#ifndef _MOTOR_H_
#define _MOTOR_H_

extern void motor_SetOutput(unsigned char motor, short torque);
extern void motor_Init(void);

#endif /* _MOTOR_H_ */
motor.c

#include <mc9s12dp256.h>
#include "pwm.h"
#include "settings.h"
#include "display.h"

void motor_SetOutput(unsigned char motor, short value)
{
  unsigned char magnitude;

  //Display on screen
  if(motor==4)
    display_setValueInt(0,value);

  //Display on screen
  if(motor==3)
    display_setValueInt(1,value);

  //If it's a torque driven motor
  if(motor<NUM_CHANNELS)
  {
    //Turn on or off the relevant direction bit
    if(value<0)
    {
      PTT&=~(1<<motor);
      magnitude=(unsigned char)(-value);
      PWM_SetDuty(motor,magnitude);
    }
    else
    {
      PTT|=1<<motor;
      magnitude=(unsigned char)value;
      PWM_SetDuty(motor,magnitude);
    }
  }
  //If it's a servo driven motor
  else
  {
    //display_setValueInt(1,(unsigned int)value);
    if(value<0)
      value=0;
    if(value>SERVORANGE)
      value=SERVORANGE;
    PWM_SetServoPos((unsigned int)value);
  }
}

void motor_Init()
{
  DDRT=0xFF;
}
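motor_SetOutput drives the torque channels in sign/magnitude form: the sign of the commanded value selects the direction bit on port T, and the absolute value becomes the 8-bit PWM duty. A minimal, hardware-free sketch of that mapping follows (hypothetical helper name, for illustration only; unlike the original cast, this version clamps out-of-range commands rather than truncating them):

```c
#include <stdint.h>

/* Split a signed torque command into a direction flag and an 8-bit PWM
   magnitude, mirroring the PTT bit / PWM_SetDuty logic in motor.c.
   Commands outside [-255, 255] are clamped (an addition over the original). */
void torque_to_pwm(int16_t value, uint8_t *forward, uint8_t *magnitude)
{
    if (value < 0) {
        *forward = 0;                                    /* clear direction bit */
        *magnitude = (value < -255) ? 255 : (uint8_t)(-value);
    } else {
        *forward = 1;                                    /* set direction bit */
        *magnitude = (value > 255) ? 255 : (uint8_t)value;
    }
}
```

Keeping direction and magnitude separate matches the H-bridge wiring: one digital line per motor sets rotation sense, while the PWM duty sets the average winding voltage.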
port.h

#ifndef _PORTH_H_
#define _PORTH_H_

extern void porth_Init(void);

/* declare interrupt service routine porth_ISR */
extern __interrupt void porth_ISR(void);

#endif /* _PORTH_H_ */
port.c

/* Multiple encoder reader on port h
   2005
   Author: Ryan T Harrison

   This program allows for multiple encoders to be placed on port h.
   The encoders should be connected:

   Pin #
   0  Encoder 1 line A
   1  Encoder 1 line B
   2  Encoder 2 line A
   3  Encoder 2 line B
   ...etc

   The change in encoder values is stored in encoderChange[].
   This variable can be reset by calling porth_ResetChange().
*/

#include <mc9s12dp256.h> /* derivative information */
#include "sci1.h"
#include "settings.h"
#include "display.h"
#include "encoder.h"
void porth_Init(void)
{
  //Port H is input
  DDRH = 0x00;
  //We are initially detecting falling edges (trivial)
  PPSH = 0x00;
  //FOUR ENCODERS (ALL EIGHT LINES)
  PIEH |= 0xFF;
}

__interrupt void porth_ISR(void)
{
  /* consider moving the following values to nearer PIFH clearing */
  //Backup micro state
  int pifh_current=PIFH;
  int ptih_current=PTIH;
  int ppsh_current=PPSH;

  //Clear the interrupt and toggle the rise and fall on it.
  PPSH ^= pifh_current; //Toggle whether we are tracking the rising or falling edge for current interrupt
  PIFH |= pifh_current; //Clear the interrupt flags (by writing 1 to them)

  //toggle on port B bit 3
  //PORTH_INTERRUPT_LED ^= 1; // Toggle output bit (PB0)

  //====== The following are all separate interrupts which must be disabled ======
  SCI1CR2 &= 0b01011111; //Disable receive and transmit interrupts
  TIE &= 0x7F;           //Disable timer interrupt, enable interrupt masking to repeat H if necessary
  IBCR &= ~(0x40);       // Disable interrupts on IIC

  // Enable interrupts
  // The only interrupt which can occur at this point
  // is the porth interrupt, which is recursive and
  // loads relevant data at its point onto the stack.
  asm cli

  int activatedIndex=0;
  byte currentLine=0;
  byte otherLine=0;
  byte otherValue=0;
  byte activatedLine=0;

  activatedLine=pifh_current;

  //Loop for multiple lines
  while(activatedLine>0)
  {
    //Find the next activated line if not the current one.
    while(!(activatedLine&0b00000011))
    {
      activatedIndex++;
      activatedLine=activatedLine>>2;
    }

    //The line we are currently examining
    currentLine=pifh_current&(0b00000011<<(activatedIndex*2));

    //If the interrupt is on an odd line, the other line is the one to the left
    //If the interrupt is on an even line, the other line is to the right
    if(activatedLine&0b01010101)
      otherLine=currentLine<<1;
    else
      otherLine=currentLine>>1;

    //The value of the other line
    otherValue=otherLine&ptih_current;

    //If we are detecting rising edges on the current interrupt
    if(currentLine&ppsh_current)
    {
      //If we are on line A
      if(otherLine>currentLine)
      {
        //If the other value is high
        if(otherValue)
          encoderChange[activatedIndex]--;
        else
          encoderChange[activatedIndex]++;
      }
      else
      {
        //If the other value is high
        if(otherValue)
          encoderChange[activatedIndex]++;
        else
          encoderChange[activatedIndex]--;
      }
    }
    else
    {
      //If we are on line A
      if(otherLine>currentLine)
      {
        //If the other value is high
        if(otherValue)
          encoderChange[activatedIndex]++;
        else
          encoderChange[activatedIndex]--;
      }
      else
      {
        //If the other value is high
        if(otherValue)
          encoderChange[activatedIndex]--;
        else
          encoderChange[activatedIndex]++;
      }
    }

    activatedIndex++;
    activatedLine=activatedLine>>2;
  }

  //====== The following are all separate interrupts which must be reenabled ======
  SCI1CR2 = sci1cr2_state; //Restore receive and transmit interrupts (variable from SCI1.h)
  IBCR |= 0x40; // Enable interrupts on IIC
  TIE |= 0x80;  //Enable timer interrupt

  //toggle on port B bit 3
  //PORTH_INTERRUPT_LED ^= 1; // Toggle output bit (PB0)
}
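The edge-based decoding in porth_ISR can be cross-checked against the standard quadrature state table: treating the previous and current (A, B) pair as 2-bit states, the transition between them determines a count of +1, -1, or 0. The table-driven sketch below is an independent reference implementation, not project code, and runs without the port H hardware:

```c
#include <stdint.h>

/* Quadrature decode: 'prev' and 'curr' are 2-bit AB states (bit 1 = A,
   bit 0 = B). Returns +1 or -1 for a valid single step, and 0 for no
   change or an invalid double-step transition. The Gray-code sequence
   00 -> 01 -> 11 -> 10 -> 00 counts as the positive direction here. */
int quad_step(uint8_t prev, uint8_t curr)
{
    static const int8_t table[16] = {
         0, +1, -1,  0,   /* prev = 00 */
        -1,  0,  0, +1,   /* prev = 01 */
        +1,  0,  0, -1,   /* prev = 10 */
         0, -1, +1,  0    /* prev = 11 */
    };
    return table[((prev & 3) << 2) | (curr & 3)];
}
```

A table lookup of this kind is an alternative to the nested if/else chain in the ISR; both resolve one count per valid edge, but the table also makes the invalid (double-step) transitions explicit, which is useful when debugging noisy encoder lines.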
pwm.h

/**********************************************************************************/
/* Header file PWM.h                                                              */
/**********************************************************************************/

#ifndef _PWM_H_
#define _PWM_H_

/* declare public functions */
extern void PWM_Init(void);
extern void PWM_SetDutyFloat(unsigned char channel, float newDuty);
extern void PWM_SetDuty(unsigned char channel, unsigned char newDuty);
extern void PWM_SetServoPosFloat(float newDuty);
extern void PWM_SetServoPos(unsigned int newDuty);

#endif /* _PWM_H_ */
pwm.c

/* pwm.c -- Pulse Width Modulation on the 9S12 -- fw-03-05           */
/*                                                                   */
/* improved version : uses PWMSCL before PCK                         */
/*                    -> better resolution (PWMSCL can be chosen in  */
/*                       steps of '2', i.e. '2', '4', '6', ... '512' */

#include <mc9s12dp256.h>  /* derivative information */
#include "settings.h"

#define SERVOMIN   864
#define SERVOMAX   3181
#define SERVORANGE (SERVOMAX-SERVOMIN)

//******** PWM_SetPeriod ***************
// Set period register of the chosen PWM channel
void PWM_SetPeriod(unsigned char channel, unsigned char period)
{
    switch(channel)
    {
        case 0: PWMPER0 = period; break;
        case 1: PWMPER1 = period; break;
        case 2: PWMPER2 = period; break;
        case 3: PWMPER3 = period; break;
        case 4: PWMPER4 = period; break;
        case 5: PWMPER5 = period; break;
    }
}

//******** PWM_SetDutyFloat ***************
// Set duty cycle register of the chosen PWM channel
void PWM_SetDutyFloat(unsigned char channel, float newDuty)
{
    // work out integer duty cycle
    unsigned int duty = (unsigned int)(newDuty*255);
    switch(channel)
    {
        case 0: PWMDTY0 = (char)duty; break;
        case 1: PWMDTY1 = (char)duty; break;
        case 2: PWMDTY2 = (char)duty; break;
        case 3: PWMDTY3 = (char)duty; break;
        case 4: PWMDTY4 = (char)duty; break;
        case 5: PWMDTY5 = (char)duty; break;
    }
}

//********* PWM_SetServoPosFloat ********
// Sets the duty cycle of the servo channel
// based on a floating point position
// where 0 corresponds to -90 degrees
// and 1 corresponds to 90 degrees
void PWM_SetServoPosFloat(float newDuty)
{
    unsigned int duty = (unsigned int)(newDuty*SERVORANGE)+SERVOMIN;
    PWMDTY67 = duty;
}

//********* PWM_SetServoPos ********
// Sets the duty cycle of the servo channel
// based on an input integer position, where
// 0 corresponds to -90 degrees and
// SERVORANGE corresponds to 90 degrees
void PWM_SetServoPos(unsigned int newDuty)
{
    unsigned int duty = newDuty+SERVOMIN;
    PWMDTY67 = duty;
}

//******** PWM_SetDuty ***************
// Set duty cycle register of the chosen PWM channel
void PWM_SetDuty(unsigned char channel, unsigned char newDuty)
{
    switch(channel)
    {
        case 0: PWMDTY0 = newDuty; break;
        case 1: PWMDTY1 = newDuty; break;
        case 2: PWMDTY2 = newDuty; break;
        case 3: PWMDTY3 = newDuty; break;
        case 4: PWMDTY4 = newDuty; break;
        case 5: PWMDTY5 = newDuty; break;
    }
}

//******** PWM_Init ***************
// Initialize PWM unit
void PWM_Init()
{
    int i;

    // Set PWM initial polarity bits to '1'
    PWMPOL = (1<<NUM_CHANNELS)-1;
    // All channels use A,B clocks.
    PWMCLK = 0x00;
    // Set PWM clock A and B prescaler to 16, rate is 1.5MHz
    PWMPRCLK = 0x44;

    // Set periods to maximum for max resolution
    // Creates period of 0.00017s, 5.88kHz
    // Set duty cycle in incremental values from 0 to 1
    for(i=0;i<NUM_CHANNELS;i++)
    {
        PWM_SetPeriod(i,255);
        PWM_SetDutyFloat(i,((float)i/(float)(NUM_CHANNELS-1)));
        PWM_SetDutyFloat(i,0);
    }

    // enable PWM on chosen channels
    PWME = (1<<NUM_CHANNELS)-1;

    // concatenate channels 6 and 7
    PWMCTL |= 0x80;
    // polarity is 1
    PWMPOL |= 0xC0;
    // Period is 30000 ticks, or 0.02s
    PWMPER67 = 30000;
    // Position is -90 degrees
    PWMDTY67 = SERVOMIN;
    PWME |= 0xC0;
}
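The servo timing constants above can be cross-checked against the stated clock configuration. The sketch below (C++, intended for desktop checking rather than the HC12 itself) assumes the 24 MHz bus and divide-by-16 prescaler set in PWM_Init, giving a 1.5 MHz PWM clock; at that rate SERVOMIN and SERVOMAX correspond to pulse widths of roughly 0.58 ms and 2.12 ms within the 20 ms servo period.

```cpp
#include <cassert>

// Assumed timing constants taken from pwm.c: 24 MHz bus / prescaler 16.
const double PWM_CLOCK_HZ = 24e6 / 16.0;   // 1.5 MHz PWM clock
const unsigned SERVOMIN_T = 864;           // minimum pulse, in PWM ticks
const unsigned SERVOMAX_T = 3181;          // maximum pulse, in PWM ticks
const unsigned PERIOD_T   = 30000;         // PWMPER67, in PWM ticks

// Convert a tick count into microseconds of pulse width.
double ticks_to_us(unsigned ticks) { return ticks * 1e6 / PWM_CLOCK_HZ; }

// Mirror of PWM_SetServoPosFloat's duty calculation:
// position 0.0 -> -90 degrees, 1.0 -> +90 degrees.
unsigned servo_ticks(double pos) {
    return (unsigned)(pos * (SERVOMAX_T - SERVOMIN_T)) + SERVOMIN_T;
}
```

This confirms the comment in PWM_Init that 30000 ticks is a 0.02 s (50 Hz) servo period.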
sci1.h

// filename ******************* sci1.h **************************
// Jonathan W. Valvano 1/29/04

// This example accompanies the books
//   "Embedded Microcomputer Systems: Real Time Interfacing",
//   Brooks-Cole, copyright (c) 2000,
//   "Introduction to Embedded Microcomputer Systems:
//   Motorola 6811 and 6812 Simulation", Brooks-Cole, copyright (c) 2002
// Copyright 2004 by Jonathan W. Valvano, [email protected]
//   You may use, edit, run or distribute this file
//   as long as the above copyright notice remains
// Modified by EE345L students Charlie Gough && Matt Hawk
// Modified by EE345M students Agustinus Darmawan + Mingjie Qiu
//
// adapted to the Dragon12 board using SCI1 -- fw-07-04

// define labels for baudrates
// (necessary 'coz 115200 isn't a 16-bit number anymore -- fw-08-04)
#define BAUD_300    0
#define BAUD_600    1
#define BAUD_1200   2
#define BAUD_2400   3
#define BAUD_4800   4
#define BAUD_9600   5
#define BAUD_19200  6
#define BAUD_38400  7
#define BAUD_57600  8
#define BAUD_115200 9

// standard ASCII symbols
#define CR  0x0D
#define LF  0x0A
#define BS  0x08
#define ESC 0x1B
#define SP  0x20
#define DEL 0x7F

extern unsigned char sci1cr2_state;

//-------------------------SCI1_Init------------------------
// Initialize Serial port SCI1
// Input: baudRate is the baud rate in bits/sec
// Output: none
extern void SCI1_Init(unsigned short baudRate);

//-------------------------SCI1_InStatus--------------------------
// Checks if new input is ready, TRUE if new input is ready
// Input: none
// Output: TRUE if a call to InChar will return right away with data
//         FALSE if a call to InChar will wait for input
extern char SCI1_InStatus(void);

//-------------------------SCI1_InChar------------------------
// Wait for new serial port input, busy-waiting synchronization
// Input: none
// Output: ASCII code for key typed
extern char SCI1_InChar(void);

// Reads in a string of maximum length
extern void SCI1_InString(char *, unsigned short);

//----------------------SCI1_InUDec-------------------------------
// InUDec accepts ASCII input in unsigned decimal format
// and converts to a 16 bit unsigned number
//   valid range is 0 to 65535
// Input: none
// Output: 16-bit unsigned number
// If you enter a number above 65535, it will truncate without an error
// Backspace will remove last digit typed
extern unsigned short SCI1_InUDec(void);

//----------------------SCI1_InULDec-------------------------------
// InULDec accepts ASCII input in unsigned decimal format
// and converts to a 32 bit unsigned number
//   valid range is 0 to 4294967295
// Input: none
// Output: 32-bit unsigned number
// If you enter a number above 4294967295, it will truncate without an error
// Backspace will remove last digit typed
extern unsigned long SCI1_InULDec(void);

//----------------------SCI1_InSDec-------------------------------
// InSDec accepts ASCII input in signed decimal format
// and converts to a 16 bit signed number
//   valid range is -32768 to +32767
// Input: none
// Output: 16-bit signed number
// If you enter a number outside +/-32767, it will truncate without an error
// Backspace will remove last digit typed
extern signed int SCI1_InSDec(void);

//----------------------SCI1_InSLDec-------------------------------
// InSLDec accepts ASCII input in signed decimal format
// and converts to a 32 bit signed number
//   valid range is -2,147,483,648 to +2,147,483,647
// Input: none
// Output: 32-bit signed number
// If you enter a number outside this range, it will truncate without an error
// Backspace will remove last digit typed
extern signed long SCI1_InSLDec(void);

//---------------------SCI1_InUHex----------------------------------------
// Accepts ASCII input in unsigned hexadecimal (base 16) format
// Input: none
// Output: 16-bit unsigned number
// No '$' or '0x' need be entered, just the 1 to 4 hex digits
// It will convert lower case a-f to uppercase A-F
// and converts to a 16 bit unsigned number
//   value range is 0 to FFFF
// If you enter a number above FFFF, it will truncate without an error
// Backspace will remove last digit typed
extern unsigned short SCI1_InUHex(void);

//-----------------------SCI1_OutStatus----------------------------
// Checks if output data buffer is empty, TRUE if empty
// Input: none
// Output: TRUE if a call to OutChar will output and return right away
//         FALSE if a call to OutChar will wait for output to be ready
extern char SCI1_OutStatus(void);

//-------------------------SCI1_OutChar------------------------
// Wait for buffer to be empty, output 8-bit to serial port
// busy-waiting synchronization
// Input: 8-bit data to be transferred
// Output: none
extern void SCI1_OutChar(char);

//-----------------------SCI1_OutUDec-----------------------
// Output a 16-bit number in unsigned decimal format
// Input: 16-bit number to be transferred
// Output: none
// Variable format 1-5 digits with no space before or after
extern void SCI1_OutUDec(unsigned short);

//-------------------------SCI1_OutString------------------------
// Output String (NULL termination), busy-waiting synchronization
// Input: pointer to a NULL-terminated string to be transferred
// Output: none
extern void SCI1_OutString(char *pt);

//--------------------------SCI1_OutUHex----------------------------
// Output a 16 bit number in unsigned hexadecimal format
// Input: 16-bit number to be transferred
// Output: none
// Variable format 1 to 4 digits with no space before or after
extern void SCI1_OutUHex(unsigned short);

// Outputs a 16 bit number by sending the most significant byte
// followed by the least significant byte.
extern void SCI1_Out16(short);
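SCI1_Out16 sends the most significant byte first, so the receiving side (here, the control PC) must reassemble the value in the same order. A small host-side sketch of that pairing:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Byte order used by SCI1_Out16: MSB first, then LSB.
std::pair<uint8_t, uint8_t> pack16(uint16_t v) {
    return { (uint8_t)(v >> 8), (uint8_t)(v & 0xFF) };
}

// The matching reassembly on the receiving side.
uint16_t unpack16(uint8_t msb, uint8_t lsb) {
    return (uint16_t)((msb << 8) | lsb);
}
```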
sci1.c

/* ***************************** sci1.c ******************************
**
**  This module implements interrupt driven background communications
**  using SCI1; a single interrupt service routine is used to service
**  both incoming as well as outgoing data streams.
**
**  fw-02-05
*/

/*
 * Modifications by Ryan Harrison
 *
 * The program has been modified to allow maskable interrupts,
 * specifically to allow the port H and J interrupts
 * to frequently interrupt the serial interrupt.
 */

#include <mc9s12dp256.h>  /* derivative information */
#include <string.h>       /* strlen() */
#include "sci1.h"
#include "pll.h"          /* macro _SYSCLOCK */
#include "rb_.h"          /* ring buffer macros */
#include "comm.h"
#include "settings.h"

#define MAX_BUFLEN 128

static char outbuf[2*MAX_BUFLEN];  /* memory for ring buffer #1 (TXD) */
static char inbuf [2*MAX_BUFLEN];  /* memory for ring buffer #2 (RXD) */

byte sci1cr2_state;
byte sendCount;

/* define o/p and i/p ring buffer control structures */
static RB_CREATE(out, char);  /* static struct ... out; -> global to this file */
static RB_CREATE(in, char);   /* static struct ... in;  -> global to this file */

/*
** -------------------------------------------------------------------------
** interrupt handler
** -------------------------------------------------------------------------
*/

#define RDRF 0x20  // Receive Data Register Full Bit
#define TDRE 0x80  // Transmit Data Register Empty Bit

__interrupt void SCI1_isr(void)
{
    SCI1CR2 &= 0b01011111;  //Disable receive and transmit interrupts
    TIE &= 0x7F;            //Disable timer interrupt (to prevent recursion)

    //toggle on port B bit 5
    SCI_INTERRUPT_LED ^= 1;

    //Enable other interrupts
    asm cli
    asm sei

    /* determine cause of interrupt */
    if((SCI1SR1 & RDRF) != 0)
    {
        /* Receive Data Register Full -> fetch character and store */
        if(!RB_FULL(&in))
        {
            *RB_PUSHSLOT(&in) = SCI1DRL;  /* store the value of SCI1DRL in the ring buffer */
            RB_PUSHADVANCE(&in);          /* next write location */
        }
        asm cli
        //Do stuff based on the received data
        comm_Evaluate();
        SCI_RECEIVED_LED ^= 1;
        asm sei
    }
    else if((SCI1SR1 & TDRE) != 0)
    {
        /* Transmission Data Register Empty -> send... */
        if(!RB_EMPTY(&out))
        {
            SCI1DRL = *RB_POPSLOT(&out);  /* start transmission of next character */
            RB_POPADVANCE(&out);          /* remove the sent character from the ring buffer */
            //SCI_TRANSMIT_TOGGLE ^= 1;
            sendCount++;
            sendCount = sendCount % 6;
        }
        else
        {
            sci1cr2_state &= ~0x80;  /* buffer empty -> disable TX interrupt */
                                     /* ... otherwise the system 'hangs' (continuous interrupts) */
            SCI_TRANSMIT_LED = 0;    /* Turn off TX light */
        }
    }

    asm cli
    //toggle on port B bit 5
    SCI_INTERRUPT_LED ^= 1;
    asm sei

    SCI1CR2 = sci1cr2_state;  //Re-enable the necessary interrupts
    TIE |= 0x80;              //Enable timer interrupt
} /* SCI1_isr */

/*
** -------------------------------------------------------------------------
** communication interface
** -------------------------------------------------------------------------
*/

/* O/P : send single character */
//-------------------------SCI1_OutChar------------------------
// Wait for buffer to be empty, output 8-bit to serial port
// busy-waiting synchronization
// Input: 8-bit data to be transferred
// Output: none
void SCI1_OutChar(char data)
{
    while(RB_FULL(&out));       /* wait until there's space in the ring buffer */
    *RB_PUSHSLOT(&out) = data;  /* place character to be sent in the buffer */
    RB_PUSHADVANCE(&out);       /* set write position for the next character to be sent */
    SCI_TRANSMIT_LED = 1;       /* Turn on TX light */
    sci1cr2_state |= 0x80;
    SCI1CR2 = sci1cr2_state;    /* (re-)enable interrupt */
} /* SCI1_OutChar */

/* O/P : send entire string */
void SCI1_OutString(char *pt)
{
    while(*pt)
    {
        SCI1_OutChar(*pt);
        pt++;
    }
} /* SCI1_OutString */

/* I/P : get single character */
char SCI1_InChar(void)
{
    char c;
    while(RB_EMPTY(&in));  /* wait until there's data in the ring buffer */
    c = *RB_POPSLOT(&in);  /* get character off the buffer */
    RB_POPADVANCE(&in);    /* set read position to the next slot */
    return c;
} /* SCI1_InChar */

//-------------------------SCI1_Init------------------------
// Initialize Serial port SCI1
// Input: baudRate is the baud rate in bits/sec
// Output: none
void SCI1_Init(unsigned short baudRate)
{
    //SCI_TRANSMIT_TOGGLE = 0;
    sendCount = 0;

    /* set-up input and output ring buffers */
    RB_INIT(&out, outbuf, 255);  /* set up TX ring buffer */
    RB_INIT(&in, inbuf, 255);    /* set up RX ring buffer */

    /* check if bus frequency has been boosted to 24 MHz (fw-07-04) */
#if _BUSCLOCK == 24
    /* 24 MHz bus frequency (PLL is used, SYNR = 2, REFDV = 0 -> factor 6)
       Baud rate generator: SCI1BDL/H = (24e6/16)/baudrate = 1.5e6/baudrate */
    switch(baudRate)
    {
        case BAUD_300:    SCI1BDH=19; SCI1BDL=136; break;
        case BAUD_600:    SCI1BDH=9;  SCI1BDL=196; break;
        case BAUD_1200:   SCI1BDH=4;  SCI1BDL=226; break;
        case BAUD_2400:   SCI1BDH=2;  SCI1BDL=113; break;
        case BAUD_4800:   SCI1BDH=1;  SCI1BDL=56;  break;
        case BAUD_9600:   SCI1BDH=0;  SCI1BDL=156; break;
        case BAUD_19200:  SCI1BDH=0;  SCI1BDL=78;  break;
        case BAUD_38400:  SCI1BDH=0;  SCI1BDL=39;  break;
        case BAUD_57600:  SCI1BDH=0;  SCI1BDL=26;  break;
        case BAUD_115200: SCI1BDH=0;  SCI1BDL=13;  break;
    }
#else
    /* 4 MHz bus frequency (PLL not used, SYNR = REFDV = 0 -> factor 2)
       Baud rate generator: SCI1BDL/H = (4e6/16)/baudrate = 250000/baudrate */
    switch(baudRate)
    {
        case BAUD_300:   SCI1BDH=3; SCI1BDL=64;  break;
        case BAUD_600:   SCI1BDH=1; SCI1BDL=160; break;
        case BAUD_1200:  SCI1BDH=0; SCI1BDL=208; break;
        case BAUD_2400:  SCI1BDH=0; SCI1BDL=104; break;
        case BAUD_4800:  SCI1BDH=0; SCI1BDL=52;  break;
        case BAUD_9600:  SCI1BDH=0; SCI1BDL=26;  break;
        case BAUD_19200: SCI1BDH=0; SCI1BDL=13;  break;
    }
#endif /* _BUSCLOCK */

    SCI1CR1 = 0;
    /* bit value meaning
        7   0   LOOPS, no looping, normal
        6   0   WOMS, normal high/low outputs
        5   0   RSRC, not applicable with LOOPS=0
        4   0   M, 1 start, 8 data, 1 stop
        3   0   WAKE, wake by idle (not applicable)
        2   0   ILT, short idle time (not applicable)
        1   0   PE, no parity
        0   0   PT, parity type (not applicable with PE=0) */

    sci1cr2_state = 0xAC;
    SCI1CR2 = sci1cr2_state;  /* enable both RX and TX interrupts */
    /* bit value meaning
        7   1   TIE, transmit interrupts on TDRE
        6   0   TCIE, no transmit interrupts on TC
        5   1   RIE, receive interrupts on RDRF
        4   0   ILIE, no interrupts on idle
        3   1   TE, enable transmitter
        2   1   RE, enable receiver
        1   0   RWU, no receiver wakeup
        0   0   SBK, no send break */
}

//-------------------------SCI1_InStatus--------------------------
// Checks if new input is ready, TRUE if new input is ready
// Input: none
// Output: TRUE if a call to InChar will return right away with data
//         FALSE if a call to InChar will wait for input
char SCI1_InStatus(void)
{
    //return(SCI1SR1 & RDRF);
    return !RB_EMPTY(&in);
}

//-----------------------SCI1_OutStatus----------------------------
// Checks if output data buffer is empty, TRUE if empty
// Input: none
// Output: TRUE if a call to OutChar will output and return right away
//         FALSE if a call to OutChar will wait for output to be ready
char SCI1_OutStatus(void)
{
    return(SCI1SR1 & TDRE);
}

//------------------------SCI1_InString------------------------
// This function accepts ASCII characters from the serial port
// and adds them to a string until a carriage return is inputted
// or until max length of the string is reached.
// It echoes each character as it is inputted.
// If a backspace is inputted, the string is modified
// and the backspace is echoed
// InString terminates the string with a null character
// -- Modified by Agustinus Darmawan + Mingjie Qiu --
void SCI1_InString(char *string, unsigned short max)
{
    int length = 0;
    char character;
    character = SCI1_InChar();
    while(character != CR)
    {
        if(character == BS)
        {
            if(length)
            {
                string--;
                length--;
                SCI1_OutChar(BS);
            }
        }
        else if(length < max)
        {
            *string++ = character;
            length++;
            SCI1_OutChar(character);
        }
        character = SCI1_InChar();
    }
    *string = 0;
}

//#ifdef ERASE

//----------------------SCI1_InUDec-------------------------------
// InUDec accepts ASCII input in unsigned decimal format
// and converts to a 16 bit unsigned number
//   valid range is 0 to 65535
// Input: none
// Output: 16-bit unsigned number
// If you enter a number above 65535, it will truncate without an error
// Backspace will remove last digit typed
unsigned short SCI1_InUDec(void)
{
    unsigned short number = 0, length = 0;
    char character;
    character = SCI1_InChar();
    while(character != CR)  // accepts until carriage return input
    {
        // The next line checks that the input is a digit, 0-9.
        // If the character is not 0-9, it is ignored and not echoed
        if((character >= '0') && (character <= '9'))
        {
            number = 10*number + (character-'0');  // this line overflows if above 65535
            length++;
            SCI1_OutChar(character);
        }
        // If the input is a backspace, then the return number is
        // changed and a backspace is outputted to the screen
        else if((character == BS) && length)
        {
            number /= 10;
            length--;
            SCI1_OutChar(character);
        }
        character = SCI1_InChar();
    }
    return number;
}

//----------------------SCI1_InULDec-------------------------------
// InULDec accepts ASCII input in unsigned decimal format
// and converts to a 32 bit unsigned number
//   valid range is 0 to 4,294,967,295
// Input: none
// Output: 32-bit unsigned number
// If you enter a number above 4294967295, it will truncate without an error
// Backspace will remove last digit typed
unsigned long SCI1_InULDec(void)
{
    unsigned long number = 0, length = 0;
    char character;
    character = SCI1_InChar();
    while(character != CR)  // accepts until carriage return input
    {
        // The next line checks that the input is a digit, 0-9.
        // If the character is not 0-9, it is ignored and not echoed
        if((character >= '0') && (character <= '9'))
        {
            number = 10*number + (character-'0');  // this line overflows if above 4294967295
            length++;
            SCI1_OutChar(character);
        }
        // If the input is a backspace, then the return number is
        // changed and a backspace is outputted to the screen
        else if((character == BS) && length)
        {
            number /= 10;
            length--;
            SCI1_OutChar(character);
        }
        character = SCI1_InChar();
    }
    return number;
}

//----------------------SCI1_InSDec-------------------------------
// InSDec accepts ASCII input in signed decimal format
// and converts to a 16 bit signed number
//   valid range is -32768 to +32767
// Input: none
// Output: 16-bit signed number
// If you enter a number outside +/-32767, it will truncate without an error
// Backspace will remove last digit typed
signed int SCI1_InSDec(void)
{
    signed int number = 0, length = 0;
    char sign = 0;  // '0': pos, '1': neg
    char character;
    character = SCI1_InChar();
    while(character != CR)  // accepts until carriage return input
    {
        // The next lines check for an optional sign character ('+' or '-')
        // and then that the input is a digit, 0-9.
        // If the character is not 0-9, it is ignored and not echoed
        if(character == '+')
            SCI1_OutChar(character);
        else if(character == '-')
        {
            sign = 1;
            SCI1_OutChar(character);
        }
        else if((character >= '0') && (character <= '9'))
        {
            number = 10*number + (character-'0');  // this line overflows if above 32767
            length++;
            SCI1_OutChar(character);
        }
        // If the input is a backspace, then the return number is
        // changed and a backspace is outputted to the screen
        else if((character == BS) && length)
        {
            number /= 10;
            length--;
            SCI1_OutChar(character);
        }
        character = SCI1_InChar();
    }
    if(sign == 1)
        return -number;
    else
        return number;
}

//----------------------SCI1_InSLDec-------------------------------
// InSLDec accepts ASCII input in signed decimal format
// and converts to a 32 bit signed number
//   valid range is -2,147,483,648 to +2,147,483,647
// Input: none
// Output: 32-bit signed number
// If you enter a number outside this range, it will truncate without an error
// Backspace will remove last digit typed
signed long SCI1_InSLDec(void)
{
    signed long number = 0, length = 0;
    char sign = 0;  // '0': pos, '1': neg
    char character;
    character = SCI1_InChar();
    while(character != CR)  // accepts until carriage return input
    {
        // The next lines check for an optional sign character ('+' or '-')
        // and then that the input is a digit, 0-9.
        // If the character is not 0-9, it is ignored and not echoed
        if(character == '+')
            SCI1_OutChar(character);
        else if(character == '-')
        {
            sign = 1;
            SCI1_OutChar(character);
        }
        else if((character >= '0') && (character <= '9'))
        {
            number = 10*number + (character-'0');  // this line overflows if above 2147483647
            length++;
            SCI1_OutChar(character);
        }
        // If the input is a backspace, then the return number is
        // changed and a backspace is outputted to the screen
        else if((character == BS) && length)
        {
            number /= 10;
            length--;
            SCI1_OutChar(character);
        }
        character = SCI1_InChar();
    }
    if(sign == 1)
        return -number;
    else
        return number;
}

//-----------------------SCI1_OutUDec-----------------------
// Output a 16-bit number in unsigned decimal format
// Input: 16-bit number to be transferred
// Output: none
// Variable format 1-5 digits with no space before or after
void SCI1_OutUDec(unsigned short n)
{
    // This function uses recursion to convert a decimal number
    // of unspecified length to an ASCII string
    if(n >= 10)
    {
        SCI1_OutUDec(n/10);
        n = n%10;
    }
    SCI1_OutChar(n+'0');  /* n is between 0 and 9 */
}

//---------------------SCI1_InUHex----------------------------------------
// Accepts ASCII input in unsigned hexadecimal (base 16) format
// Input: none
// Output: 16-bit unsigned number
// No '$' or '0x' need be entered, just the 1 to 4 hex digits
// It will convert lower case a-f to uppercase A-F
// and converts to a 16 bit unsigned number
//   value range is 0 to FFFF
// If you enter a number above FFFF, it will truncate without an error
// Backspace will remove last digit typed
unsigned short SCI1_InUHex(void)
{
    unsigned short number = 0, digit, length = 0;
    char character;
    character = SCI1_InChar();
    while(character != CR)
    {
        digit = 0x10;  // assume bad
        if((character >= '0') && (character <= '9'))
            digit = character-'0';
        else if((character >= 'A') && (character <= 'F'))
            digit = (character-'A')+0xA;
        else if((character >= 'a') && (character <= 'f'))
            digit = (character-'a')+0xA;
        // If the character is not 0-9 or A-F, it is ignored and not echoed
        if(digit <= 0xF)
        {
            number = number*0x10+digit;
            length++;
            SCI1_OutChar(character);
        }
        // Backspace outputted and return value changed if a backspace is inputted
        else if(character == BS && length)
        {
            number /= 0x10;
            length--;
            SCI1_OutChar(character);
        }
        character = SCI1_InChar();
    }
    return number;
}

//--------------------------SCI1_OutUHex----------------------------
// Output a 16 bit number in unsigned hexadecimal format
// Input: 16-bit number to be transferred
// Output: none
// Variable format 1 to 4 digits with no space before or after
void SCI1_OutUHex(unsigned short number)
{
    // This function uses recursion to convert a number of
    // unspecified length to an ASCII string
    if(number >= 0x10)
    {
        SCI1_OutUHex(number/0x10);
        SCI1_OutUHex(number%0x10);
    }
    else if(number < 0xA)
        SCI1_OutChar(number+'0');
    else
        SCI1_OutChar((number-0x0A)+'A');
}

//#endif /* ERASE */

// Outputs a 16 bit number by sending the most significant byte
// followed by the least significant byte.
extern void SCI1_Out16(short number)
{
    byte output = (number & 0xFF00) >> 8;
    SCI1_OutChar(output);
    output = number & 0x00FF;
    SCI1_OutChar(output);
}
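The divisor table in SCI1_Init can be reproduced from the rule stated in its comment: for a 24 MHz bus, SCI1BD = (24e6/16)/baudrate = 1.5e6/baudrate, split into high and low bytes. A short host-side check of that arithmetic:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Divisor rule from sci1.c for a 24 MHz bus clock:
// SCI1BD = 1.5e6 / baud, truncated to an integer as in the table.
std::pair<uint8_t, uint8_t> sci1_divisor(uint32_t baud) {
    uint32_t d = 1500000u / baud;
    return { (uint8_t)(d >> 8),      // SCI1BDH
             (uint8_t)(d & 0xFF) };  // SCI1BDL
}
```

For example, 300 baud gives a divisor of 5000, i.e. SCI1BDH = 19 and SCI1BDL = 136, matching the first case in the listing; 115200 baud truncates 13.02 to 13.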
settings.h

/* Timer interrupt rate is currently every 2.73ms */

/* Port B status lights */
#define HBLED               PORTB_BIT0
#define SCI_RECEIVED_LED    PORTB_BIT1
#define SCI_TRANSMIT_LED    PORTB_BIT2
//#define PORTH_INTERRUPT_LED PORTB_BIT3
#define PORTJ_INTERRUPT_LED PORTB_BIT3
//#define SCI_TRANSMIT_TOGGLE PORTB_BIT3
#define SCI_INTERRUPT_LED   PORTB_BIT4
#define COMM_TRANSMIT_LED   PORTB_BIT5
#define COMM_UPDATING_LED   PORTB_BIT6
#define COMM_RECEIVING_LED  PORTB_BIT7

/* porth.c */
#define ENCODER_RESOLUTION 256
#define NUM_ENCODERS_H 3
/* other port */
#define NUM_ENCODERS_O 2
/* both */
#define NUM_TOTAL_ENCODERS 6

/* comm.c */
//Size of the storage buffer for serial input in bytes
#define STORED_SIZE 12
//Enable for terminal interface (for debugging purposes)
#define TERMINAL 0
//Communications timeout is approximately 100ms.
//#define TIMEOUT 30
#define TIMEOUT 30000

/* motor.c */

/* pwm.c */
#define NUM_CHANNELS 5
#define SERVOMIN 864
#define SERVOMAX 3181
#define SERVORANGE (SERVOMAX-SERVOMIN)

/* display.c */
#define LINE0DISPLAY "Torque 4:000000"
#define LINE1DISPLAY "Torque 3:000000"
timer.h

/**********************************************************************************/
/*  Header file timer.h                                                           */
/**********************************************************************************/

#ifndef _TIMER_H_
#define _TIMER_H_

/* declare initialisation function TOC7_Init */
extern void TOC7_Init(void);

/* declare interrupt service routine TOC7_ISR */
extern __interrupt void TOC7_ISR(void);

extern unsigned short interruptCount;

#endif /* _TIMER_H_ */
timer.c

/* ************************ timer.c ****************************
 *  Simple timer (TC7)
 * ************************************************************ */

#include <mc9s12dp256.h>  /* derivative information */
#include "settings.h"
#include "comm.h"

unsigned short interruptCount;

/* interrupt handler, timer 7 (vector 15) */
__interrupt void TOC7_ISR(void)
{
    TIE &= 0x7F;  //Disable timer interrupt (to prevent recursion)
    asm cli       //Enable other interrupts

    interruptCount++;
    comm_TimerUpdate();  //Updates the communications timer (for timeouts)

    TFLG1 = 0x80;  // acknowledge interrupt (clears C7F)
    TIE |= 0x80;   //Enable timer interrupt
}

// the interrupt handler TOC7handler is called every 'timerValue' ticks
// (note: the period of the LED should be twice as long [on phase, off phase])
// example: timerValue = 40000, prescale value = 001 (divide-by-2 -> 500 ns)
//          timer period : 40000 * 500 ns = 20 ms
#define timerValue 0xFFFF

void TOC7_Init(void)
{
    interruptCount = 0;

    TIOS = 0x80;    // Configure channel 7 for output compare (OC) mode
    TCTL1 &= 0x3F;  // 00xx.xxxx -> OM7 = OL7 = 0 (disconnect pin 'IOC7' from o/p logic)
    OC7M = 0x00;    // do not affect any of the linked output pins (IOC0 - IOC6)

    /* Bottom three bits of TSCR2 (PR2,PR1,PR0) determine TCNT period
           divider   resolution   maximum period
       000     1       42ns          2.73ms     <---------------
       001     2       84ns          5.46ms
       010     4       167ns         10.9ms
       011     8       333ns         21.8ms
       100     16      667ns         43.7ms
       101     32      1.33us        87.4ms
       110     64      2.67us        174.8ms
       111     128     5.33us        349.5ms */
    TSCR2 = 0x00;  // Divide clock by 1, disable TOI (timer overflow interrupt)

    TC7 = timerValue;  // initialize timer register (channel 7)
    TIE = 0x80;        // enable channel 7 interrupt
    TSCR1 = 0x80;      // enable timer (set TEN bit)
}
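The 2.73 ms interrupt rate quoted in settings.h follows directly from this configuration: with a 24 MHz bus, a prescaler of 1 and a full 16-bit count of 0x10000 ticks between compares, the period works out as below (a desktop check, assuming the 24 MHz bus figure used elsewhere in the firmware):

```cpp
#include <cassert>
#include <cmath>

// Timer interrupt period for a given bus clock, prescaler and tick count.
double timer_period_s(double bus_hz, unsigned prescale, unsigned long ticks) {
    return ticks * (double)prescale / bus_hz;
}
```

65536 ticks at 24 MHz is approximately 2.731 ms, agreeing with the comment in settings.h.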
Appendix G – Documentation of Main Robot Class
The control program was written in C++ using object-oriented programming. This
section defines the main class, 'Robot', and describes its major functions; it
should assist anyone who continues the project in future years.
Robot Object
The Robot class allows any variation of a serial revolute robotic manipulator to be defined
and controlled. The Robot constructor takes in all the parameters that are unique to the
individual robot being controlled. Below is a complete list of the robot constructor
parameters.
RFLOAT* mass: A vector containing each link's mass
RFLOAT DHinit[6][4]: Matrix defining the Denavit-Hartenberg parameters
RFLOAT inertiainit[6][6]: Matrix containing the moments of inertia at the centre of mass
of each link
RFLOAT gravity[3]: Gravity vector, which depends on the orientation of coordinate
system 0
RFLOAT* coulomb and RFLOAT* viscous: Vectors containing the Coulomb and viscous
friction coefficients respectively
RFLOAT* posGains, RFLOAT* velGains: Vectors containing the gains to be used in the
control system
RFLOAT* torqueConstants: Torque constants of each of the actuators
RFLOAT* gearRatios: Gear ratios of the actuators
RFLOAT* currentLimits: Vector containing the maximum allowable current for each
actuator
RFLOAT* rotorInertias: Vector containing the rotor inertias of each of the robot's motors
RFLOAT* initialPos: The initial positions of the robot
RFLOAT* readPos: The ready positions of the robot
Each of the above input parameters is used to define each individual link of the robot. In
all robot calculations it is easier to have these parameters separated into links. For this
purpose, on the declaration of a new robot an array of JointState() objects is created, where
each JointState() object contains all the information specific to one link.
Once all of these parameters have been defined, a robot object has been created to which
the following methods can be applied.
RobotSetup()
This function does all the initial processing that can be done offline. This reduces the
amount of computation required while the main program is running.
UpdateState()
UpdateState() calculates all the matrices and vectors required for the torque calculations.
It does this based on the information taken from the current state of each of the joints of
the robot.
NewtonEuler()
This method calculates the dynamics of the robot by applying the Newton-Euler
equations. It places the calculated torques in a vector called desiredTorques.
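As a purely illustrative sketch of the terms such a calculation balances, the single-joint case can be written out explicitly. This is not the project's implementation (which recurses over all links); it simply mirrors the inertia, friction and gravity parameters passed to the Robot constructor, and assumes a horizontal-axis joint with its angle measured from upright:

```cpp
#include <cmath>

// Single-joint inverse dynamics sketch:
//   tau = I*qdd + viscous*qd + coulomb*sign(qd) + m*g*r*sin(q)
// I       combined link + reflected rotor inertia
// qdd, qd joint acceleration and velocity
// m, g, r link mass, gravitational acceleration, distance to centre of mass
// q       joint angle measured from the upright position (assumed convention)
double joint_torque(double I, double qdd,
                    double viscous, double coulomb, double qd,
                    double m, double g, double r, double q)
{
    double sgn = (double)((qd > 0) - (qd < 0));  // sign(qd), 0 at rest
    return I*qdd + viscous*qd + coulomb*sgn + m*g*r*std::sin(q);
}
```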
GetCamera()
GetCamera() checks the serial port connected to the vision computer. If new position
information is available, it is used as an index to look up the desired shot information
from a lookup table. If the program is running in simulator mode, the index is instead
taken from the user information entered through the Graphical User Interface. This
method returns true when a new shot update has been found.
UpdateDesiredTrajectory()
UpdateDesiredTrajectory() calculates the desired position, velocity and acceleration of
each link based on the current trajectory of the robot. It does this by first calculating the
current shot time through a call to GetCurrentShotTime(), and then calling the
EvaluateTrajectory() method of each link's trajectory object, which returns a vector
containing the desired joint states.
RequestRealEncoders()
RequestRealEncoders() reads in the current joint positions of the robot. It does this via
the serial connection to the microcontroller. The program first waits for the appropriate
start byte and then reads in bytes until the end byte has been sent. The program then
checks whether exactly six bytes have been sent between the start and end bytes; if fewer
than six bytes have been sent, the current set of data is thrown away. Each of these six
bytes holds the number of encoder ticks since the last update. If there are fewer than six
joints on the current robot, the values for the non-existent joints will be zero and are
consequently ignored. The position and velocity of each joint are then calculated and the
variables currentPositions and currentVelocity are updated.
ConvertCurrents()
ConvertCurrents() takes the torque values calculated by NewtonEuler() and converts
them into the associated currents to be sent to the microcontrollers. It also limits the
currents to the maximum values defined in the Robot object described earlier.
The following two methods, ChangeTrajectory() and UpdateMotors(), form the main loop
of the control system. These two methods are called alternately whenever the processor is
free to do so.
ChangeTrajectory()
ChangeTrajectory() is primarily responsible for calculating new trajectories when new
position information is available or the current state of the robot has changed. This method
functions as a state machine and updates joint information based on what state the robot
is in. If the robot is in a mode that allows it to receive camera data, then
ChangeTrajectory() calls GetCamera() as described above. It then makes a new trajectory
object based on this data and stores the joint trajectories in the vector jointTrajectories. The
final function of ChangeTrajectory() is determining what state the robot is in. It does this
by checking the transition variables of each state, as well as checking for any user input
from the GUI.
UpdateMotors()
UpdateMotors() is responsible for determining the correct currents to be sent to the
microcontrollers and then sending them. It first calculates the required currents by
making a call to GetCurrents(). GetCurrents() updates the current joint states through a
call to RequestRealEncoders() and then evaluates the desired joint states by calling
UpdateDesiredTrajectory(). GetCurrents() then uses this information to evaluate the
control law in order to calculate the desired acceleration set-point. UpdateState(),
NewtonEuler() and ConvertCurrents() are then called, producing the current values,
which are passed back to UpdateMotors(). UpdateMotors() then sends these values to
the microcontroller via the same protocol as described under RequestRealEncoders().
Once the microcontrollers have been updated, ChangeTrajectory() is called again and the
loop continues until the robot is deactivated.
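The "control law" mentioned under GetCurrents() is not written out in this appendix. A standard computed-torque style acceleration set-point, consistent with the posGains and velGains parameters passed to the Robot constructor, would take the following form; this is an assumed reconstruction, not the project's verified code:

```cpp
// Acceleration set-point for one joint:
//   qdd_set = qdd_des + kp*(q_des - q) + kv*(qd_des - qd)
// kp, kv correspond to the posGains and velGains constructor vectors;
// the *_des values come from UpdateDesiredTrajectory(), the measured
// q and qd from RequestRealEncoders().
double accel_setpoint(double qdd_des,
                      double kp, double q_des, double q,
                      double kv, double qd_des, double qd)
{
    return qdd_des + kp*(q_des - q) + kv*(qd_des - qd);
}
```

The set-point is then fed through the Newton-Euler dynamics to obtain torques, and from there currents.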
Appendix H – Connection guide
Dragon 12 Board
Port B
Port B contains several status lights:
Bit Behaviour
0 Heartbeat LED – toggles approximately every half second.
1 Toggles on every received byte from the serial port.
2 On when data is ready to be sent across the serial port.
3 Activated when encoder data is being analysed. This is currently enabled for Port J,
and can be modified.
4 On during serial interrupt routine.
5 Toggles on serial packet transmission.
6 Toggles when motor data is updated.
7 On while a packet is being received but not yet complete. Will stay visible until the
packet is completed or a timeout occurs.
Port H
Bit  Connection
0    Encoder 1 Channel A
1    Encoder 1 Channel B
2    Encoder 2 Channel A
3    Encoder 2 Channel B
4    Encoder 3 Channel A
5    Encoder 3 Channel B
6    Encoder 4 Channel A
7    Encoder 4 Channel B
Behaviour: the Port H interrupt is called on data change.
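A sketch of how an interrupt routine can decode these A/B channel pairs: on each pin change, the (previous, current) two-bit state selects a step direction from the standard quadrature transition table. This is a generic decoder, not necessarily the exact table used in the firmware.

```cpp
#include <cassert>

// Decode one quadrature transition. States are encoded as (A << 1) | B.
// Returns +1 for a forward step, -1 for a reverse step, and 0 for no
// change or an invalid (double) transition.
int QuadratureStep(int prevAB, int currAB) {
    // Index = prev * 4 + curr. Forward sequence: 00 -> 01 -> 11 -> 10.
    static const int table[16] = {
         0, +1, -1,  0,
        -1,  0,  0, +1,
        +1,  0,  0, -1,
         0, -1, +1,  0 };
    return table[(prevAB & 3) * 4 + (currAB & 3)];
}
```

In the interrupt routine, the previous port reading is kept in a static variable and the decoded step is accumulated into the joint's position counter.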
Port T
Bit  Connection
0    Direction Control Motor 1
1    Direction Control Motor 2
2    Direction Control Motor 3
3    Direction Control Motor 4
4    Direction Control Motor 5
5    Not Connected
6    Not Connected
7    Not Connected
Behaviour (bits 0-4): 1 when the output current is positive, 0 when negative.
Port P
Bit  Connection
0    Low pass filter 1 input
1    Low pass filter 2 input
2    Low pass filter 3 input
3    Low pass filter 4 input
4    Low pass filter 5 input
5    Not Connected
6    Not Connected
7    Servo Motor Signal
Behaviour (bits 0-4): each signal is passed through a low pass filter to create the analog input for the corresponding current control circuit.
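Because the low pass filters simply average the PWM waveform, the analog reference seen by the current control circuit is approximately the duty cycle times the 5V supply. A sketch of the conversion, assuming an 8-bit duty register (the register width is an assumption, not taken from the firmware):

```cpp
#include <cassert>

// Average (filtered) voltage produced by an 8-bit PWM duty value on a
// 5V supply: V = (duty / 255) * Vcc.
double AnalogReference(int duty8bit) {
    const double Vcc = 5.0;
    return (duty8bit / 255.0) * Vcc;
}

// Inverse mapping: nearest 8-bit duty value for a requested reference
// voltage, clamped to the valid range.
int DutyForVoltage(double volts) {
    int d = static_cast<int>(volts / 5.0 * 255.0 + 0.5);
    return d < 0 ? 0 : (d > 255 ? 255 : d);
}
```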
Port J
Bit  Connection
0    Not Connected
1    Not Connected
6    Encoder 5 Channel A
7    Encoder 5 Channel B
Behaviour: the Port J interrupt is called on data change.
SCI0
This serial port is connected to a system running Metrowerks CodeWarrior for program download and maintenance.
SCI1
This serial port is connected to the computer running the control system.
Notes
The +5V power outlet on the board is capable of powering each of the encoders; however, the board may require more power than usual to operate in this fashion. If the board's power supply is pulled down significantly by the encoder connections, the board will reset intermittently.
LMD18245 Current Control Board
Label    Connection         Behaviour
5V       Not Connected      +5V, not used.
Dir      Port T bit N       Direction toggle.
Br       Not Connected      Brakes the motor when high.
M1-M4    Not Connected      M1…M4 represent a 4-bit number, M, with M4 being the most significant bit. The output current is scaled by M/16.
Gnd      Dragon 12          Ground.
DAC Ref  Low pass filter N  Analog 0-5V signal setting the output current.
Gnd      Dragon 12          Ground.
DC +     Power Supply       Power is supplied at the values required by the motors.
DC -     Power Supply
Motor +  Motor
Motor -  Motor
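On this board the M1…M4 pins are left unconnected and the analog DAC Ref signal controls the current instead, but the chip's digital scaling can still be illustrated. The following hypothetical helper maps a signed current request onto the Port T direction bit and a 4-bit magnitude code scaled by M/16; the full-scale current and all names are assumptions for the sketch.

```cpp
#include <cassert>

// Hypothetical command for one LMD18245: direction bit plus 4-bit
// magnitude code M (output current = (M / 16) * full scale).
struct LMD18245Command { bool dir; int m; };

LMD18245Command CurrentToCommand(double amps, double fullScale) {
    LMD18245Command cmd;
    cmd.dir = (amps >= 0);                       // Port T: 1 when positive
    double mag = amps < 0 ? -amps : amps;
    int m = static_cast<int>(mag / fullScale * 16.0 + 0.5);
    cmd.m = m > 15 ? 15 : m;                     // M1..M4 hold 0..15
    return cmd;
}
```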
14 References
Andersson, RL 1988, A Robot Ping-Pong Player: Experiment in Real-Time Intelligent Control, The MIT Press, London.
Mayer, G, Shimano, BE & Paul, RP 1981, 'Kinematic Control Equations for Simple Manipulators', IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-11, no. 6, pp. 449-455.
Khalil, W 2002, Modelling, Identification and Control of Robots, Hermes Penton Ltd.
Lee, CSG 1982, 'Robot Arm Kinematics, Dynamics and Control', Computer, vol. 15, no. 12, pp. 62-80.
Pieper, DL 1968, The Kinematics of Manipulators under Computer Control, Artificial Intelligence Project Memo No. 72, Computer Science Department, Stanford University.
Siciliano, B & Sciavicco, L 1999, Modelling and Control of Robot Manipulators, Springer-Verlag, London.
Lee, CSG 1983, Robot Arm Kinematics, The University of Michigan, Michigan.
Lee, CSG 1983, Robot Arm Dynamics, The University of Michigan, Michigan.
Paul, RP 1981, Robot Manipulators: Mathematics, Programming and Control, MIT Press, pp. 41-84.
Colson, JC & Perreira, ND 1983, 'Kinematic Arrangements Used in Industrial Robots', Proceedings of the 13th International Symposium on Industrial Robots, Chicago, Illinois, pp. 20-1 to 20-18.
Lee, CSG, Lee, BH & Nigam, R 1982, 'An Efficient Formulation of Robot Arm Dynamics for Control Analysis and Manipulator Design', Technical Report RSD-TR-82, Centre for Robotics and Integrated Manufacturing, University of Michigan.
Hollerbach, JM 1980, 'A Recursive Lagrangian Formulation of Manipulator Dynamics and a Comparative Study of Dynamics Formulation Complexity', IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC.
Turney, JL, Mudge & Lee, CSG 1980, 'Equivalence of Two Formulations for Robot Arm Dynamics', SEL Report 142, ECE Department, University of Michigan.
Craig, JJ 2005, Introduction to Robotics: Mechanics and Control, Pearson Education, Upper Saddle River.
Gruen, A 2001, Calibration and Orientation of Cameras in Computer Vision (Springer Series in Information Sciences), 1st edn, Springer.
Linder, W 2003, Digital Photogrammetry: Theory and Applications, 1st edn, Springer.
Nixon, M & Aguado, AS 2002, Feature Extraction in Computer Vision and Image Processing, 1st edn, Reed Educational and Professional Publishing Ltd.
Parker, JR 1997, Algorithms for Image Processing and Computer Vision, John Wiley & Sons Inc.
Sood, AK & Wechsler, H 1992, Active Perception and Robot Vision (NATO ASI Series, Series III: Computer and Systems Sciences), Springer.
Vernon, D 2000, Computer Vision, ECCV 2000: 6th European Conference on Computer Vision, Dublin, Ireland, June 26-July 1, 2000, Proceedings (Lecture Notes in Computer Science), 1st edn, Springer-Verlag.