Australian National University COMP4560
ANNOTATION IN AUGMENTED REALITY Abhishek Kookana
Supervised by Matt Adcock
CSIRO Data61
Acknowledgement
I would like to express my appreciation to my supervisor Matt Adcock,
who was both very helpful and supportive during this project. Due to
personal circumstances, I was unavailable to meet or work on this project
for multiple days every week and Matt was understanding and nothing but
helpful. Matt also provided me with guidance when I was having great difficulty overcoming issues with the core of the project; without his guidance I would not have completed it.
I would also like to thank Weifa Liang and Peter Strazdins for their
guidance and effort in managing this course. The weekly meetings
provided useful suggestions and insights for the project and directed my
focus to where it mattered more.
Finally, I would like to thank my family for putting up with too many late
nights and my constant fatigue.
Declaration
This project was conducted between February 2017 and May 2017 for
COMP4560 Advanced Computing Project.
Except where otherwise indicated, I hereby declare that the work
presented in this project is solely my original work and has not been
previously submitted either in whole or in part for a degree in any
university.
Abhishek Kookana
u5586286
Australian National University
Canberra, ACT, Australia
Table of Contents
Acknowledgement
Declaration
1.0 Abstract
2.0 Introduction
2.1 Related work
3.0 Theoretical Framework/ Software Architecture
3.1 Highlighting
3.1.1 Vertex Colouring
3.1.2 Retexturing
3.1.3 Submeshes
3.2 Leap Motion
3.3 Interaction
4.0 Performance Evaluation
4.1 Technical Performance
4.2 Functional Performance
4.3 Reflection
5.0 Conclusion
5.1 Future Work
6.0 References
7.0 Appendices
1.0 Abstract
Augmented reality allows users to see real-time computer generated
images in stereoscopic 3D and interact with them whilst retaining their
view of the real world. This provides a sense of scale, perspective and
comparison that a flat screen does not. When combined with 3D scanned
objects, AR can provide a perfect avenue for seeing virtual representations of real-world objects placed in the real world.
Rather than simply displaying them, it is useful to be able to highlight and
label them to convey relevant information about the object. This is the
purpose of annotation. Currently there is no efficient or intuitive method to annotate 3D models within augmented reality; annotations must be authored outside AR and imported. This project aims to examine
the best ways to fulfil this task.
Using a hand-tracking device called the Leap Motion, different methods of annotation were investigated and tested. The method that worked best was to use submeshes with different textures to highlight areas of
the model. This was done in two ways: drawing highlights onto the model
directly and selecting areas to highlight with a volume.
It was found that annotation methods requiring precision, such as drawing, were ineffective with the Leap Motion. Larger movements
like volume selection are more effective and allow the user to translate
their movements into annotations more purposefully. The Leap Motion
can provide a capable platform for annotation as an intuitive and almost
tactile experience when combined with the appropriate interactions.
2.0 Introduction
Augmented reality is a technology designed to superimpose a computer-
generated image on a user’s view of the real world and is intended to
mesh real-world and virtual interaction. Whilst the general public is becoming more aware of the uses of virtual reality, augmented reality is
taking a different approach and is currently being offered as an enterprise
product rather than a consumer one. The Microsoft HoloLens is currently
the most complete offering of augmented reality and is marketed as “the
first self-contained, holographic computer, enabling you to engage with
your digital content and interact with holograms in the world around
you”1. The development edition of the HoloLens has been available to buy since March 2016, and since then companies have built programs taking advantage of the HoloLens' unique features.
Data61, a branch of the CSIRO focused on digital innovation2, is one such
organisation that has been working in the field of VR and AR technology
(among others) for years and proposed this task. Having access to 3D
scanning technology, it is often useful to annotate these scans with
valuable information in order to categorise and contextualise the scanned
data. Currently there are extremely limited means to annotate such
models within the HoloLens environment, with the input method being
limited to a point-and-click methodology. Efficient annotation is limited to
non-AR platforms. Augmented reality provides benefits as an annotation
platform as it can allow for a better perspective on the models; they can
be seen in 3D space and scaled relative to their environment.
1 Microsoft, 2016, Microsoft HoloLens, https://www.microsoft.com/en-us/hololens, accessed 20/05/17
2 Swan, D., 2015, CSIRO swallows up NICTA to create Data61, http://www.theaustralian.com.au/business/business-spectator/csiro-swallows-up-nicta-to-create-data61/news-story/cec8a743d703964f4bbebd63ce28130b?nk=a43987671934270192e7a243d2d64b91-1494172731, accessed 15/05/17
The input methods for the HoloLens include basic gestures to activate a
mouse click and bring up a menu interface. Other than this, a dot is
projected in front of the user imitating a mouse pointer that the user can
use to point at different interface elements. The benefit of this type of user interface is that there are no extra peripherals to interact with; the interaction occurs comfortably in the user's hands. In order to take advantage of a wider range of gestures and input whilst retaining this hands-on approach to interaction, this project uses the Leap Motion, a small hand-tracking device, to translate more complex hand motion into annotation generation.
The aim of the project is to provide a platform to make annotations to 3D
models in an intuitive fashion whilst retaining the perspective benefits of the HoloLens platform and providing a more effective input
method. The project achieves this best when using broad strokes that are
then refined into precise selections rather than defining selections with
precise movements. The Leap Motion provides an intuitive but unreliable
platform in terms of precision, and this is reflected in the results.
2.1 Related work
Annotation in virtual or even augmented reality is not a new concept. The
Institute of Electrical and Electronics Engineers (IEEE) holds an annual 3D
User Interface Contest3 which tasks teams to generate 3D user interfaces
to annotate point clouds from 3D scans. These entries are generally
executed on the desktop platform, but have provided much of the
inspiration for the design of the user interface and interaction. The Leap
Motion has been used for entries in the past and has been shown to be a
useful resource when dealing with intuitive 3D user interaction.
The work of Benjamin Nuernberger4 displayed the potential for a fully realised annotation system in augmented reality, but rather than dealing with 3D models, his system annotates physical space. His project displays the potential for gesture control to deliver an intuitive annotation platform in augmented reality. The project is based on the HoloLens' inbuilt gestures and uses a click-and-drag method with the mouse pointer to draw shapes that represent annotations.

3 IEEE, 2014, Contestants, http://3dui.org/2014/contestants.htm, accessed 28/02/17
4 Nuernberger, B., 2017, Benjamin Nuernberger, http://cs.ucsb.edu/~bnuernberger/, accessed 28/02/17
These projects provided some inspiration for work that has not been done before; the core contribution of this project is the uniqueness of its platform combined with the minimal amount of pre-existing work conducted in the area. This project provides the first annotation platform on the HoloLens to take advantage of the Leap Motion as an input device and investigates the benefits and drawbacks of this style of interaction.
3.0 Theoretical Framework/ Software Architecture
This project is focused on investigating what annotation techniques work
best when paired with Leap Motion and Augmented reality. As such, it is
worth noting that the program developed does not support saving and
labelling annotations as it is the techniques that are the focus, not the
function of the overall program.
3.1 Highlighting
The basis for this project has been to find an efficient method to annotate
3D models that is both robust enough to convey the same information as
existing methods as well as integrate appropriately with the hardware
used in the project.
This project has iterated through multiple methods of annotation in order
to find the most suitable for this format. The method of annotating a
model in this project is to be able to highlight any given segment of the
model in a clear and accurate manner as we felt this conveyed the most
information.
3.1.1 Vertex Colouring
Initially we investigated the method of using customised Unity Shaders to
change the appearance of different areas of the model dynamically.
Different areas of the model would be recoloured by changing the colour
values at each vertex. The problem with this was that, rather than producing clean lines, each recoloured vertex would blend into the colours of its neighbouring vertices, giving the annotated regions blurry edges.
Figure 1. How vertex colour shows on a standard and altered shader5

5 Defaxer, 2010, Standard Shader with Vertex Colours, https://forum.unity3d.com/threads/standard-shader-with-vertex-colors.316529/, accessed 20/05/17
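To make the approach concrete, the following is a minimal sketch (not the project's actual code) of how vertex recolouring around a contact point could look in a Unity C# script. The component name, radius and colour are illustrative assumptions, and a vertex-colour-aware shader would still be required to display the result.

```csharp
using UnityEngine;

// Hypothetical sketch of the vertex-colouring approach: tint every vertex
// within a small radius of a hit point and let a vertex-colour shader show it.
public class VertexColourHighlighter : MonoBehaviour
{
    public float radius = 0.02f;            // highlight radius in local units (illustrative)
    public Color highlight = Color.yellow;  // tint applied to selected vertices

    public void PaintAt(Vector3 worldPoint)
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] verts = mesh.vertices;
        Color[] cols = mesh.colors.Length == verts.Length
            ? mesh.colors
            : new Color[verts.Length];       // initialise if no colours are set yet

        Vector3 local = transform.InverseTransformPoint(worldPoint);
        for (int i = 0; i < verts.Length; i++)
        {
            if (Vector3.Distance(verts[i], local) < radius)
                cols[i] = highlight;         // neighbours interpolate, hence blurry edges
        }
        mesh.colors = cols;                  // the shader blends colours across each polygon
    }
}
```

Because the colour is interpolated between vertices when rendered, the highlighted region inevitably fades into its surroundings, which is exactly the blurriness described above.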
A further problem is that recent Unity updates have deprecated the vertices.color attribute in Unity, and working around this would remove the project's futureproofing completely and lock it to a specific Unity build older than the current release. At this stage, we reconsidered options that would work in current and future versions of Unity.
3.1.2 Retexturing
We investigated the potential of retexturing the model dynamically and
writing to the texture file at runtime. This would allow for any model to be placed into the program with minimal extra formatting required; the user would simply tick a checkbox to set the texture to allow reading and writing of data. This was implemented by performing a ray-cast from the user's finger (or rather, the finger represented by the hand tracked by the Leap Motion) and finding where it intersected with the 3D model. At this point, the coordinates of intersection were calculated and the pixels around this contact point were changed to a specific colour to convey a highlighted area for annotation.
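As a rough illustration of this approach, the sketch below (again with assumed names, not the project's code) casts a ray from the tracked fingertip, reads the UV coordinate of the hit and recolours a few pixels around it. It assumes the model has a MeshCollider and a texture marked as readable; the 3-pixel brush size is arbitrary.

```csharp
using UnityEngine;

// Hypothetical sketch of the dynamic retexturing approach.
public class TexturePainter : MonoBehaviour
{
    public Color highlight = Color.yellow;

    public void PaintFrom(Vector3 fingerTip, Vector3 fingerDirection)
    {
        RaycastHit hit;
        if (!Physics.Raycast(fingerTip, fingerDirection, out hit))
            return;

        Renderer rend = hit.collider.GetComponent<Renderer>();
        Texture2D tex = rend.material.mainTexture as Texture2D;
        if (tex == null)
            return;

        // UV coordinate of the contact point, scaled into pixel space.
        Vector2 uv = hit.textureCoord;
        int cx = (int)(uv.x * tex.width);
        int cy = (int)(uv.y * tex.height);

        // Recolour a small square of pixels around the contact point.
        // Pixels adjacent in the texture are not guaranteed to be adjacent
        // on the model, which is the flaw discussed below.
        for (int x = cx - 3; x <= cx + 3; x++)
            for (int y = cy - 3; y <= cy + 3; y++)
                if (x >= 0 && x < tex.width && y >= 0 && y < tex.height)
                    tex.SetPixel(x, y, highlight);

        tex.Apply();
    }
}
```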
The problem with this approach is the way in which Unity maps textures
to 3D models:
Figure 2. How the texture of an object (left) can non-contiguously map to the object (right)6

6 WyrmTale Games, 2015, Dice Pack Light, http://www.wyrmtale.com/, accessed 20/05/17
Two adjacent points in a texture file don’t necessarily correspond to
adjacent points on a 3D model as different areas of the texture file “map”
to areas of the model. There were three options to deal with this issue.
One potential solution was to test each pixel being changed to see whether the corresponding points on the model were actually adjacent; however, doing this for every pixel in real time would cause massive issues with performance and overhead. Another solution was to reconfigure the texture map so that it mapped contiguously; however, this is a task that would require an external program and extra work both within the project and by the user, as every model input into the program would require its texture to be reformatted by the user beforehand. Thus, the third option was to find an alternative annotation solution that worked regardless of the format of the model.
3.1.3 Submeshes
The implemented method to annotate the models in this project is to alter
the submesh array at runtime.
Figure 3. Subject with annotation (left), submesh 1 (middle), submesh 0 (right)7
A mesh is simply the set of points that make up a 3D model in Unity, and a submesh is an array of polygons that make up a subset of a mesh. Submeshes are each assigned a texture and designate which polygons of a mesh have which texture; i.e. they allow a single model to have multiple textures assigned to it. In this project, submesh 0 contains the entire model and all of its polygons. Submesh 1 is the submesh we alter, as any polygons added to it are overlaid over the first, producing highlighted areas. Dynamically altering this at runtime allows the user to change the texture of specific polygons on command, making it a method of annotation. The beauty of this approach is that it avoids the issues of the previous approaches.
7 Organic Polygons, 2015, Lion Statue, http://organicpolygons.com/, accessed 20/05/17
Because submeshes deal with entire polygons rather than single vertices, they produce clean lines at the edges of the highlighted regions, as these are simply the edges of the object's polygons. Submeshes have also been in Unity for
years and are unlikely to be deprecated any time soon, so there should be
no issue with futureproofing. Concerning formatting, every model
necessarily has a mesh, so the project will accept any standard 3D
modelling format that Unity does. Polygons for meshes are also stored
efficiently in a “triangle array”. Each polygon in Unity takes the format of
a triangle and each triangle is stored as a set of three integers in an array.
Each integer references the index of its relevant vertex in the array of
vertices for the relevant object.
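As a small illustration of this layout (using assumed names rather than the project's code), the three corner positions of a given triangle can be recovered as follows:

```csharp
using UnityEngine;

// Hypothetical sketch of Unity's triangle array layout: triangle t occupies
// three consecutive integers, each an index into the vertex array.
public class TriangleLayoutExample : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        int[] tris = mesh.GetTriangles(0);   // submesh 0: all polygons of the model
        Vector3[] verts = mesh.vertices;

        int t = 0;                           // the first triangle, as an example
        Vector3 a = verts[tris[3 * t + 0]];
        Vector3 b = verts[tris[3 * t + 1]];
        Vector3 c = verts[tris[3 * t + 2]];
        Debug.Log("Triangle corners: " + a + ", " + b + ", " + c);
    }
}
```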
There are two methods for annotating using submeshes in this project.
In the first, using a method similar to the retexturing in section 3.1.2, a ray is cast from the user's finger and a point is returned representing its intersection with the 3D model. This point gives the index of the triplet in
the triangle array for the relevant triangle. These indices are then copied
from submesh 0 into submesh 1, producing a highlighted area at the
relevant polygon. When deselecting the polygons, these indices in
submesh 1 are set back to 0, removing the highlighting for that section
and returning it to the normal texture. The second method is using a
volume to select multiple polygons at once. A volume, either a sphere or
a cube in this case, is placed intersecting the 3D model. The program
then iterates through each vertex in the model and performs a simple check on its coordinates to determine whether the vertex is contained within the volume; if it is, the corresponding polygon is highlighted.
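The first method can be sketched roughly as below (a simplified, hypothetical version, not the project's actual script): the triangle returned by the ray-cast is copied from submesh 0 into submesh 1, which is rendered with the highlight texture. It assumes the model has a MeshCollider and a renderer with two materials.

```csharp
using UnityEngine;
using System.Collections.Generic;

// Hypothetical sketch of submesh-based highlighting via a finger ray-cast.
public class SubmeshHighlighter : MonoBehaviour
{
    public void HighlightFrom(Vector3 fingerTip, Vector3 fingerDirection)
    {
        RaycastHit hit;
        if (!Physics.Raycast(fingerTip, fingerDirection, out hit))
            return;

        Mesh mesh = hit.collider.GetComponent<MeshFilter>().mesh;
        if (mesh.subMeshCount < 2)
            mesh.subMeshCount = 2;                     // make sure submesh 1 exists

        int[] all = mesh.GetTriangles(0);              // submesh 0: the whole model
        List<int> highlighted = new List<int>(mesh.GetTriangles(1));

        // Copy the hit triangle's three vertex indices into submesh 1.
        int t = hit.triangleIndex;
        highlighted.Add(all[3 * t + 0]);
        highlighted.Add(all[3 * t + 1]);
        highlighted.Add(all[3 * t + 2]);

        mesh.SetTriangles(highlighted, 1);             // redrawn with the highlight material
    }
}
```

Deselection works in reverse on the same triplet in submesh 1, as described above.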
3.2 Leap Motion
The Leap Motion8 is the chosen input device for this project and, whilst being an extremely intuitive device to use, it has presented its own difficulties.

8 Leap Motion, 2017, Leap Motion, https://www.leapmotion.com/, accessed 20/05/17

The Leap Motion has been available since 2013
but has recently received a large-scale update to its software and
development suite called Orion. Orion improved the tracking capabilities
and provided a range of inbuilt input methods that were invaluable to the
project and avoided the necessity to program a custom gesture-
recognition system. Instead, the majority of the difficulty with the Leap
Motion was in the tracking data itself.
To have a Leap Motion work on a Microsoft HoloLens is far from trivial.
The HoloLens has a USB port, but rather than allowing the Leap Motion to be plugged in directly, it is reserved for updates and charging. There is no method to connect the Leap Motion directly to the HoloLens. In
order to have the devices communicate they must do so wirelessly.
The problem with streaming tracking data to Unity is that once the data is
stripped raw and transmitted, it loses the ability to integrate with the
functionality provided by Orion. This is because the “wrapper” object in which the tracking data is stored, and which allows it to communicate with the inbuilt functions of Orion, is discarded when transmitting via websockets9. Only the pure tracking data is transmitted as a stream of coordinates. Fixing
this would require a rewrite of either the Orion system, to accommodate this alternate datatype, or of the websocket API.

9 Leap Motion, 2016, Websocket Communication, https://developer.leapmotion.com/documentation/unity/supplements/Leap_JSON.html, accessed 20/04/17
The solution to this problem was, rather than transmitting the tracking data, to run Unity on a PC and transmit the entire instance of Unity to the HoloLens using an application called HoloLens Remoting. Remoting
was originally designed for programs that were too demanding to run on
the HoloLens hardware standalone, usually in a mid-development phase.
Remoting provides a way to have the project interact very quickly with
the HoloLens without having to export, compile and transfer it to the
headset and as such is very useful to developers. Remoting takes
advantage of the power of the computer it is connected to and transmits
the image to the HoloLens whilst transmitting the tracking data for the
HoloLens back to the PC simultaneously, all with low latency.
The benefit of this method is that it allows for consistent development and execution, as Unity runs essentially identically on the desktop as it does with the HoloLens attached. The downside, however, is that an instance of Unity is required on the PC side rather than simply a standalone program. This is likely to change in future iterations of Unity, and the program is built with no dependencies on specific display methods. If a new method to display PC content on a HoloLens were released, this program would adapt to it easily.
It is worth noting that the technical requirements for the PC connected to the HoloLens are reasonably high-end. The main requirement is a graphics card equivalent to an Nvidia GTX 970 or better10. This is what all the tests have been done on, and it is arguable that a user with access to a ~$4000 AUD HoloLens will have access to a computer with those capabilities, considering the GTX 970 was the best budget performer of the last generation of graphics cards. In any case, the main purpose of this project is to investigate annotation techniques rather than the specific implementation of the annotation program.

10 Microsoft, 2016, Holographic Remoting Player, https://developer.microsoft.com/en-us/windows/mixed-reality/holographic_remoting_player, accessed 20/05/17
3.3 Interaction
The choice of interaction methods stemmed from some of the 3D UI Contest entries and was designed to make manipulating the model with the user's hands as intuitive as possible. The Orion suite was then consulted to see what could be adapted from a pre-existing tool and what would need to be developed from scratch. The resulting actions were pinch-to-drag, double-pinch-to-scale and point-to-paint.
Point-to-paint allows the user to essentially have a small sphere attached
to their virtual finger that acts as a brush and to use this to highlight the polygons in the model by simply touching them. This action is reminiscent
of finger painting and, provided the device is calibrated correctly, it can give controlled highlighting of individual polygons or small clusters of them.
This works simply by casting a ray from the end of the finger to the object.
When the object is hit, the index of the polygon impacted is returned and
highlighted. This is only triggered if the “brush” is contacting the object.
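A hedged sketch of this trigger logic follows (illustrative names only; the SubmeshHighlighter is the hypothetical component from the earlier sketch): the ray is limited to the brush radius so that highlighting only fires while the brush is in contact with the surface.

```csharp
using UnityEngine;

// Hypothetical sketch of the point-to-paint trigger.
public class PointToPaint : MonoBehaviour
{
    public float brushRadius = 0.01f;        // size of the virtual brush sphere (illustrative)
    public SubmeshHighlighter highlighter;   // hypothetical component from the earlier sketch

    // Called every frame with the tracked fingertip position and direction.
    public void Paint(Vector3 tipPosition, Vector3 tipDirection)
    {
        RaycastHit hit;
        // Limit the ray to the brush radius so painting only happens on contact.
        if (Physics.Raycast(tipPosition, tipDirection, out hit, brushRadius))
        {
            highlighter.HighlightFrom(tipPosition, tipDirection);
        }
    }
}
```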
Pinch-to-drag and double-pinch-to-scale are control mechanisms that allow for manipulation of either the 3D model itself or a selection volume. There are two selection volumes available in the program; the user can either select or deselect an area to highlight using either a sphere or a cube.
Either object provides a real-time representation of where they overlap
with the model using custom Unity shaders (a holdover from the initial
experimentation with shaders). These shaders project a texture onto the
3D model wherever the selection volume overlaps it, essentially providing
a guide to where it will be highlighted. This volume selection method
allows the user to annotate larger portions of the model at a time. The
pinching works by making a pinching motion in either or both hands,
allowing you to drag the selection volumes around and, in the case of
pinching with both hands, scale the volume by moving the hands further
apart or closer together. Pinch-to-drag also applies to the 3D model itself, allowing the user to reposition it more comfortably.
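The scaling behaviour can be sketched as follows (hypothetical names; the pinch positions would come from the Leap Motion hand data): the volume is scaled by the ratio of the current distance between the two pinch points to the distance when the gesture began.

```csharp
using UnityEngine;

// Hypothetical sketch of double-pinch-to-scale.
public class DoublePinchScale : MonoBehaviour
{
    public Transform selectionVolume;   // the sphere or cube being resized
    private float startDistance;
    private Vector3 startScale;

    // Called once when both hands begin pinching.
    public void BeginScale(Vector3 leftPinch, Vector3 rightPinch)
    {
        startDistance = Vector3.Distance(leftPinch, rightPinch);
        startScale = selectionVolume.localScale;
    }

    // Called every frame while both pinches are held.
    public void UpdateScale(Vector3 leftPinch, Vector3 rightPinch)
    {
        float current = Vector3.Distance(leftPinch, rightPinch);
        float factor = current / Mathf.Max(startDistance, 0.001f);   // guard against division by zero
        selectionVolume.localScale = startScale * factor;            // hands apart = larger volume
    }
}
```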
The highlighting itself using the selection volumes works by iterating through each polygon of the 3D model's mesh and doing a simple check to see whether each of its corners is contained within the selection volume. In the case of the sphere, the check is whether the distance from the corner to its centre is less than the radius; in the case of the cube, whether the corner points are within the Unity inbuilt bounding box. Each calculation is simply
an iteration through an array of indices, so it completes fairly efficiently,
and the resulting polygons are added to the appropriate submesh, as
mentioned in section 3.1.
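A sketch of this containment test for the sphere volume is shown below (hypothetical names, not the project's code); the cube case would replace the distance comparison with Unity's Bounds.Contains check.

```csharp
using UnityEngine;
using System.Collections.Generic;

// Hypothetical sketch of volume selection: every triangle whose three corners
// all fall inside the selection sphere is added to the highlight submesh.
public class VolumeSelector : MonoBehaviour
{
    public Transform model;             // the annotated 3D model
    public Vector3 sphereCentre;        // selection sphere centre, in world space
    public float sphereRadius = 0.1f;   // illustrative radius

    public void HighlightInsideSphere()
    {
        Mesh mesh = model.GetComponent<MeshFilter>().mesh;
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.GetTriangles(0);
        List<int> highlighted = new List<int>();

        // Walk the triangle array three indices at a time.
        for (int i = 0; i < tris.Length; i += 3)
        {
            bool inside = true;
            for (int c = 0; c < 3; c++)
            {
                Vector3 corner = model.TransformPoint(verts[tris[i + c]]);
                // For a cube volume this would be bounds.Contains(corner) instead.
                if (Vector3.Distance(corner, sphereCentre) >= sphereRadius)
                {
                    inside = false;      // one corner outside: skip this triangle
                    break;
                }
            }
            if (inside)
            {
                highlighted.Add(tris[i]);
                highlighted.Add(tris[i + 1]);
                highlighted.Add(tris[i + 2]);
            }
        }

        if (mesh.subMeshCount < 2)
            mesh.subMeshCount = 2;
        mesh.SetTriangles(highlighted, 1);   // submesh 1 carries the highlight texture
    }
}
```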
4.0 Performance Evaluation
The main performance characteristic that this project is concerned with is whether the system can successfully provide a reasonably efficient tool for highlighting 3D models, and it does. The main form of annotation
that we have focused on in the project is selection and deselection of
different areas with highlighting and for that we have two separate tools
as mentioned in section 3.3.
Point-to-paint allows for fine selection of individual polygons whereas
volume selection allows for broader areas to be selected or deselected.
There are two categories of performance criteria for these methods: technical performance and functional performance, or how it runs and how it feels.
4.1 Technical Performance
Because the program runs over HoloLens Remoting rather than natively on the HoloLens, specific framerates are difficult to verify. From our experience, the program runs below 60fps but well above 40. It is perceptible that it is not running at a completely smooth framerate of 60, but the performance is still usably smooth. Overall
although not perfect, the performance is well within usable parameters
and would improve if the project were exported to a standalone product.
The latency of the program is also within reason; there is a perceptible delay when using the project if you pay attention to it, but it is not
noticeable whilst actively using the program. As the screen capture from
the HoloLens introduces more overhead and more latency, there is no
reliable way to measure the specific delay amounts.
As well as general performance, the presence or absence of performance spikes should also be noted. In this case, even when highlighting using volume selection, which performs an operation on every vertex in the model, there was no noticeable performance spike; that is, no drops in performance brought the framerate down by any noticeable amount. Perhaps with a particularly complex
model with a large number of vertices this would cause a stutter, but with
a “medium” complexity model consisting of approximately 3000 polygons,
the highlighting action produced no visible stutter.
4.2 Functional Performance
The double-pinch-to-scale method worked well in testing and provided a much more intuitive analogue for annotation than point-to-paint. This is because interaction with the Leap Motion, even in good conditions with the device properly calibrated, can produce tracking “noise”. The Leap
Motion tracking camera only sees the hands from one perspective, so
when fingers are occluded by the palm of the hand or even one another,
the Orion software will attempt to guess what position they are in. This can lead to unintentional finger bending, which can be frustrating when trying to paint polygons on an object precisely.
Figure 5. Point-to-paint using the left finger
The Leap Motion works best when the hands are held 15-30cm above the unit and when pinching motions are made with only the index finger and thumb, leaving all other fingers relaxed. This allows the camera to register what all fingers are doing more easily, rather than only seeing a jumble of fingers making a fist. The radius that the Leap Motion can track is extensive, and so long as the user is sitting in front of the device, they should have no issues within reason. The effective comfortable range in testing was found to be an area approximately 0.5 metres in width above the Leap Motion.
Figure 6. Pinching motion on either side of sphere to resize it
In spite of finger occlusion issues, using broader movements to describe
object selection works well. Using volumes to select different areas to
annotate and using double-pinch to transform and scale them is intuitive
and works with minimal tracking anomalies. This is because, even when these anomalies occur, it is the movement of the whole hand rather than the fingers that drives the input, so the movement is much more stable. The input to trigger the movement, the pinching, is tracked from
the fingers, but the movement of the object itself is tracked from the
palm.
4.3 Reflection

The point-to-paint method of annotation allows users to paint polygons
with their finger to highlight them, but the Leap Motion lacks the precision
or consistency to provide this function accurately. The work of Benjamin
Nuernberger11, mentioned in section 2.1, shows that he is able to build a
system where fine lines and shapes can be drawn in 3D space to denote
annotations. The way in which he achieves this without encountering precision issues is that he takes the raw input from the user and provides them with a smoothed version of their annotation. Rather than retaining the shaky lines drawn by the user, he provides them with a line of best fit. The importance of Nuernberger's work is in this line of best fit.
Volume selection was the most successful method of annotation in this
project, the inspiration for which was provided by multiple projects from
the IEEE 3D User Interface Contest12, also mentioned in section 2.1. This
yearly contest produced great examples of effective 3D user interfaces;
the projects “Slice and Swipe” and “Touching the Cloud” were the most
influential for this project as they are both annotation software
implemented using the Leap Motion or similar device. Slice and Swipe is a
project that allows for a point cloud to be split into two halves with a
slashing motion and then one half is discarded. This can be done
repeatedly to refine the selection until the user is happy to highlight this
selection and annotate. Touching the Cloud is a project that uses a device very similar to the Leap Motion to scale, rotate and traverse a point cloud using finger pinches. These projects demonstrate the effectiveness of using broad movements to overcome the precision issues of the Leap Motion and show why these interactions were successful in their implementations.
11 Chang, Nuernberger, Luan, Hollerer, Gesture-Based Augmented Reality Annotation IEEE VR 2017 Demo, https://www.youtube.com/watch?v=fMTWqkBdM-0, accessed 28/02/17
12 IEEE, 2014, Contestants, http://3dui.org/2014/contestants.htm, accessed 28/02/17
5.0 Conclusion
The project was a technical success: multiple annotation methods were investigated and the most appropriate was implemented to examine its strengths and flaws. Full hand tracking in the HoloLens using the Leap Motion is an intuitive experience when tracked correctly and provides an almost tactile sensation when interacting. The performance of the program is not completely smooth but is well within the range of usability; however, the system is not without its flaws.
Whilst the project generated software that could author highlighting annotations, the Leap Motion proved to be a somewhat inaccurate tool for precise annotation. For broader selections using volumes, dragging objects around can be extremely intuitive, but brushing on annotations with the user's finger, whilst doable in testing, is awkward and prone to tracking loss in practice.
5.1 Future Work

A potential extension to this work could be to implement more methods of broad selection, as they work well on this platform. The Slice and Swipe method would fit these criteria and, given the results of this project, would be a good way to provide fine precision for selection. Instead of selecting the areas to be highlighted directly, there should be a broad selection process that can be refined until the user is satisfied with its precision. This dampens the movement and lessens the effect of minor judders in tracking on the final annotation.
Another potential extension to this work would be to reimplement the
interaction methods in this project natively in the HoloLens. The work of
Ben Nuernberger13 shows that native HoloLens tracking is capable of producing meaningful annotation when formatted correctly. The Leap
Motion was selected for this project because of the freedom it provided in
tracking the whole hand, however the only real gestures used in the
project are pointing and pinching as they proved to be the most intuitive.
This means that the project could theoretically be implemented
standalone without the requirement for this external hardware. The
HoloLens does not track in 3D space anywhere near as well as the Leap
Motion and rather does so mostly in a 2D plane in front of the user,
however, similar annotation methods could potentially be generated to
accommodate this.

13 Chang, Nuernberger, Luan, Hollerer, Gesture-Based Augmented Reality Annotation IEEE VR 2017 Demo, https://www.youtube.com/watch?v=fMTWqkBdM-0, accessed 28/02/17
6.0 References
Chang, Nuernberger, Luan, Hollerer, Evaluating Gesture-Based Augmented Reality Annotation, https://www.youtube.com/watch?v=0HacoNGjOtg, accessed 28/02/17

Chang, Nuernberger, Luan, Hollerer, Gesture-Based Augmented Reality Annotation IEEE VR 2017 Demo, https://www.youtube.com/watch?v=fMTWqkBdM-0, accessed 28/02/17

Defaxer, Unity, 2010, Standard Shader with Vertex Colours, https://forum.unity3d.com/threads/standard-shader-with-vertex-colors.316529/, accessed 20/05/17

IEEE, 2014, Contestants, http://3dui.org/2014/contestants.htm, accessed 28/02/17

Leap Motion, 2016, Websocket Communication, https://developer.leapmotion.com/documentation/unity/supplements/Leap_JSON.html, accessed 20/04/17

Microsoft, 2016, Microsoft HoloLens, https://www.microsoft.com/en-us/hololens, accessed 20/05/17

Nuernberger, B., 2017, Benjamin Nuernberger, http://cs.ucsb.edu/~bnuernberger/, accessed 28/02/17

Organic Polygons, 2015, Lion Statue, http://organicpolygons.com/, accessed 20/05/17

Swan, D., 2015, CSIRO swallows up NICTA to create Data61, http://www.theaustralian.com.au/business/business-spectator/csiro-swallows-up-nicta-to-create-data61/news-story/cec8a743d703964f4bbebd63ce28130b?nk=a43987671934270192e7a243d2d64b91-1494172731, accessed 15/05/17

Unity, 2017, Manual, https://docs.unity3d.com/Manual/index.html, accessed 05/03/17

WyrmTale Games, 2015, Dice Pack Light, http://www.wyrmtale.com/, accessed 20/05/17
7.0 Appendices
INDEPENDENT STUDY CONTRACT
Note: Enrolment is subject to approval by the projects co-ordinator
SECTION A (Students and Supervisors)
UniID: u5586286____________
SURNAME: Kookana_____________ FIRST NAMES: Abhishek______________________
PROJECT SUPERVISOR (may be external): Matt Adcock_____________________________________
COURSE SUPERVISOR (a RSCS academic): Weifa Liang_____________________________________
COURSE CODE, TITLE AND UNIT: COMP4560- Advanced Computing Project- 12 Units__________
SEMESTER S1 S2 YEAR: 2017______________
PROJECT TITLE:
Annotation in Augmented Reality
LEARNING OBJECTIVES:
1. Learning the basic concepts and characteristics relevant to AR development.
2. By comparing with VR (with which there is previous experience), understanding the
differences and connection between VR and AR especially from programming perspective
such as how to handle different input data.
3. Developing programming skills to create app for AR within Unity game engine and
engaging in the whole pipeline of software development from requirements and
implementation. Finally, deploying the app on hardware successfully.
4. Based on the existing applications of AR, discovering the potentials and possibilities of
different input methods and their limits and advantages objectively.
5. Running a user evaluation of the system that is developed.
6. Developing my writing skills and producing a mature course report.
PROJECT DESCRIPTION:
Augmented Reality, such as is possible with displays like Microsoft HoloLens, allows us to insert 3D
graphics into the physical world. Sometimes we need to interact with these objects. An example of this
is the addition of various types of virtual annotations. Authoring annotations can be useful in tasks like
collaborative decision making, explaining parts of an object to a co-worker, or creating training
material. However, techniques for authoring annotations in AR and VR are not well understood.
This project seeks to prototype a number of spatial user interfaces and interactive techniques that will
accelerate annotation authoring. It will consider a range of different annotation types and should
result in design guidelines and methods for implementing 3D annotation tools. An important
challenge will be to identify a suite of tools that enable as much detail in the input as possible, in
terms of both accuracy and precision.
ASSESSMENT (as per course’s project rules web page, with the differences noted below):
Assessed project components: % of mark / Due date / Evaluated by:
Report: name style: _____ (e.g. research report, software description..., no less than 45% weight assigned) - 45% - May 26th - (examiner)
Artefact: name kind: _____ (e.g. software, user interface, robot..., no more than 45% weight assigned) - 45% - May 26th - (supervisor)
Presentation: 10% - May 22nd - (course convenor)
MEETING DATES (IF KNOWN):
Welcome and introductions (March 6)
Overview and Presentations: Things to avoid and elements which work (March 20)
Initial Project Presentations (March 27)
Report Writing: Structure and Setting Out (April 24)
Implementation and Evaluation Issues (May 1)
Writing Up Reports (May 8)
Getting your project fully back on its wheels (May 15)
Final Project Presentations (May 22)
STUDENT DECLARATION: I agree to fulfil the above defined contract:
………………………………………………….. ………………………..
Signature Date
SECTION B (Supervisor):
I am willing to supervise and support this project. I have checked the student's academic record
and believe this student can complete the project.
………………………………………………….. ………………………..
Signature Date
REQUIRED DEPARTMENT RESOURCES:
SECTION C (Course coordinator approval)
………………………………………………….. ………………………..
Signature Date
SECTION D (Projects coordinator approval)
………………………………………………….. ………………………..
Signature Date