sasia2014_haptic_shapes_authorversion.pdf


Long, B., Seah, S. A., Carter, T., & Subramanian, S. (2014). Rendering volumetric haptic shapes in mid-air using ultrasound. ACM Transactions on Graphics, 33(6), Article 181. doi:10.1145/2661229.2661257

Link to published version (if available): https://doi.org/10.1145/2661229.2661257

Link to publication record in Explore Bristol Research (PDF document)

University of Bristol - Explore Bristol Research

General rights

This document is made available in accordance with publisher policies. Please cite only the published version using the reference above. Full terms of use are available: http://www.bristol.ac.uk/pure/about/ebr-terms.html

    Take down policy

Explore Bristol Research is a digital archive and the intention is that deposited content should not be removed. However, if you believe that this version of the work breaches copyright law please contact [email protected] and include the following information in your message:

Your contact details; bibliographic details for the item, including a URL; and an outline of the nature of the complaint.


    Rendering Volumetric Haptic Shapes in Mid-Air using Ultrasound

Benjamin Long, Sue Ann Seah, Tom Carter, Sriram Subramanian

    Department of Computer Science, University of Bristol, UK *

Figure 1: Left: Ultrasound is focused to create the shape of a virtual sphere. Middle: Ultrasound pushes the impression of a sphere section into oil. Right: A simulation of the ultrasound at the plane intersected by the hand, where a threshold has been applied to highlight the foci.

    Abstract

We present a method for creating three-dimensional haptic shapes in mid-air using focused ultrasound. This approach applies the principles of acoustic radiation force, whereby the non-linear effects of sound produce forces on the skin which are strong enough to generate tactile sensations. This mid-air haptic feedback eliminates the need for any attachment of actuators or contact with physical devices. The user perceives a discernible haptic shape when the corresponding acoustic interference pattern is generated above a precisely controlled two-dimensional phased array of ultrasound transducers. In this paper, we outline our algorithm for controlling the volumetric distribution of the acoustic radiation force field in the form of a three-dimensional shape. We demonstrate how we create this acoustic radiation force field and how we interact with it. We then describe our implementation of the system and provide evidence from both visual and technical evaluations of its ability to render different shapes. We conclude with a subjective user evaluation to examine users' performance for different shapes.

CR Categories: H.5.2 [Information Interfaces and Presentation]: User Interfaces - Haptic I/O.

Keywords: 3D haptic shapes, tactile displays, acoustic radiation forces

    Links: DL PDF

* e-mail: {b.long, s.a.seah, t.carter, sriram.subramanian}@bristol.ac.uk

© ACM, 2014. This is the authors' version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution.

The definitive version was published in ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), Volume 33, Number 6, Article 181, Publication Date: November 2014.

    http://doi.acm.org/10.1145/2661229.2661257

    1 Introduction

Haptics is a key factor in enhancing presence if we are to create fully immersive and realistic virtual environments [Reiner 2004]. Haptic feedback has been developed and evaluated for many applications ranging from teleoperation, entertainment, rehabilitation and even surgical training [Hayward et al. 2004]. This diverse use of haptic feedback has led to numerous methods of producing it in virtual reality and augmented reality systems.

One of the most common techniques for exploring a virtual environment is using proxy devices such as the SensAble PHANTOM or the Novint Falcon. The SensAble PHANTOM allows the user to interact with a virtual scene by moving a pen through 3D space [Bianchi et al. 2006]. When the pen comes into contact with a virtual object, an articulated arm attached to the pen provides resistance, thereby enabling the user to feel it. This method is also possible with magnetic levitation, removing the need for the articulated arm [Berkelman et al. 1996]. However, these techniques are not ideal for see-through and reach-through displays where images are floating in space. Attachments could be worn on the user's hand, e.g. data gloves [Wusheng et al. 2003; Dipietro et al. 2008], but this requires the process of fitting them on before use. This can be cumbersome and prevents instantaneous user interaction.

An elegant solution would be to produce tactile sensations in mid-air where users are able to get haptic feedback without contact with any physical object or any actuator attached. Research has shown that this can be achieved using air jets [Suzuki and Kobayashi 2005], air vortices [Sodhi et al. 2013], and ultrasound [Carter et al. 2013; Hoshi et al. 2010]. Among these three, we have identified ultrasound as the most flexible and dynamic method for producing volumetric haptic shapes.

In this paper, we present a method for rendering volumetric haptic shapes using focused ultrasound. These haptic shapes can be experienced by the user with their bare hands, giving them the ability to walk up and use the system, as they do not need any tools or attachments, as shown in Figure 1.


    Our main contributions are:

1. We present the first non-contact haptic feedback system capable of producing feelable three-dimensional shapes.

2. We improve upon existing algorithms to render large numbers of control points in real time with a predictable frame rate.

3. We introduce three optimizations to increase the strength of the tactile region.

4. Through technical and user studies we demonstrate that our algorithm works and that users can accurately identify 3D shapes without visual feedback.

    2 Background and Related Work

    2.1 Mid-Air Haptic Devices

There have been a number of methods to provide haptic feedback in mid-air but so far none of them have been used to produce shapes. AIREAL [Sodhi et al. 2013] is one such technology that uses directable air vortices to create tactile sensations in 3D space. Here, the tactile sensations are of low fidelity as the area of stimulation is large and there is a latency to producing the air vortices in a specific location. Air jets have also been used, but these lack accuracy and are difficult to control [Suzuki and Kobayashi 2005]. A method which has a higher fidelity is using ultrasound-based acoustic radiation forces [Hoshi et al. 2010; Alexander et al. 2011; Carter et al. 2013], which produces multiple individually perceivable points of 1 cm diameter in mid-air. We advance on this method to be able to create volumetric shapes.

2.2 Creating Tactile Sensations using Focused Ultrasound

The use of focused ultrasound as a non-invasive method to stimulate neuroreceptor structures in various parts of the human body has been a topic of research since the early 1970s [Gavrilov and Tsirulnikov 2012]. Dalecki et al. [1995] first proposed the idea of using water-based ultrasound to create tactile sensations on a finger attached to an acoustic reflector floating at the surface of a water bath. It was later demonstrated that the skin can itself act as an acoustic reflector in a medium of air, thus realising the use of air-based focused ultrasound to produce tactile sensations [Carter et al. 2013; Hoshi et al. 2010].

These tactile sensations are caused by a non-linear effect of focused ultrasound called acoustic radiation force. The radiation force induces a shear wave in the skin tissue, creating a displacement, which triggers the mechanoreceptors within the skin [Gavrilov and Tsirulnikov 2002]. The maximum displacement u_max of a medium induced by radiation force from a pulse of focused ultrasound is defined by Gavrilov and Tsirulnikov [2002] as:

\[
u_{\max} = \frac{\alpha\, a\, c_t\, t_0}{\mu\, c_l}\, I, \quad \text{where } t_0 \ll a/c_t,
\]
\[
u_{\max} = \frac{\alpha}{\rho\, c_l\, c_t^{2}}\, a^{2} I = \frac{\alpha}{\mu\, c_l}\, a^{2} I = kW, \quad \text{where } t_0 \gg a/c_t, \tag{1}
\]

where a is the radius of the focal region, t_0 is the duration of the pulse, c_t is the speed of the shear wave's propagation, c_l is the speed of sound, μ is the shear elastic modulus (μ = ρc_t², with ρ the density of the medium), α is the absorption coefficient, I and W are the intensity and acoustical power (both averaged over the pulse duration) and k is an amalgamated constant.

The tactile sensations, however, cannot be felt continuously unless the stimulus also changes continuously with time, as tactile receptors are mainly sensitive to changes in skin deformation roughly between 200 Hz and 300 Hz [Gescheider and Wright 2008]. Thus, the ultrasound has to be modulated at a frequency which corresponds to the peak sensitivity of the tactile receptors.

2.3 Multiple Focal Points using Two-Dimensional Phased Arrays

Two-dimensional phased arrays of ultrasound transducers enable tactile sensations to be produced in three dimensions in mid-air. Hoshi et al. [2010] describe a system using a linear focusing method to dynamically create and move a single focal point. They also suggested a Fourier transform based inverse technique, but this would be fundamentally limited to a single plane of feedback parallel to a well-sampled plane of transducers.

Based on their method of generating a single focal point, Alexander et al. [2011] created up to four focal points by spatial multiplexing (treating subsections of the array as separate arrays to create single focal points) and temporal multiplexing (reconfiguring the array to produce single focal points in different places serially in time). Both these methods suffer from either the secondary maxima of multiple focal points constructively interfering with each other, thus creating extra regions of perceivable haptic feedback, or conversely the residual ultrasound from a focal point destructively interfering with other focal points.

Carter et al. [2013] proposed a solution to this problem by introducing the concept of null control points, at which the amplitude of the ultrasound is minimized. Any secondary maxima can then be eliminated by positioning a null control point on them. This solution is an adaptation of a focusing method proposed by Gavrilov [2008]. Both of these techniques are based on Ebbini and Cain [1989] creating multiple simultaneous control points, wherein a minimum norm step containing an explicit inversion (which is time-consuming and can generate numerical instability) has been augmented with a weighting matrix. This minimum norm step containing the weighting matrix is then iterated to convergence in order to achieve maximum power output for a given control point configuration.

Even though Hertzberg et al. [2010] optimised Gavrilov's algorithm and improved its efficiency with regards to maximising transducer power, their algorithm still took more than 70 ms to find a solution with only 9 control points. As this technique uses multiple iterations to a convergence criterion, the run time can fluctuate and a smooth frame rate will be difficult to achieve. This implies that the solutions previously described are not suitable for rendering volumetric shapes.

When interacting with virtual content, a hand will make contact with several parts of an object at once. It is therefore necessary to provide three-dimensional haptic feedback all across the hand which is fast enough so that it is not perceived as discontinuous. To create a volumetric shape we will require a far larger number of control points and an even faster run time. We will also need to regulate the amplitude of each control point and ensure that the array is efficient at generating these control points. To achieve these ends, our work obviates the costly matrix inversion and the iterative reweighting method used in previous work, replacing it with a different formulation that is more robust, predictable and efficient on modern parallel hardware. We describe how these changes are effected in the following section.

    3 Algorithm: Controlling an Acoustic Field

    An array of ultrasonic transducers can be described as a collectionof apertures emitting sound waves of a known frequency. Using


Figure 2: An illustration showing the field functions that make up the Rayleigh-Sommerfeld diffraction integral. Top left: The diffraction convolution function can be interpreted as the result of the wave passing through an infinitesimal slit. Distance from the slit Δz increases to the right. Bottom left: The aperture (shown in white) that the planar wave passes through, in this case the grille from the upper surface of the transducer. Right: The convolved function, approximating the wave field emitted from the single transducer shown. Again, distance from the aperture Δz increases to the right.

the Rayleigh-Sommerfeld diffraction integral we can relate known sound wave phases and amplitudes traveling through the aperture with spatially defined phases and amplitudes in the far field [Gavrilov 2008].

In Section 3.1, we describe how to synthesize an acoustic field as the solution to a linear system. By modelling the output of an ultrasonic transducer and approximating the near field behavior, fast algorithms can be used to compute the numerical integrations involved by expressing the diffracted sound wave function from a single transducer. This can then be used to characterize a single transducer so that we can solve for a transducer configuration that closely reproduces a set of field values at given points. As the produced acoustic field is controlled only at these points, they are known as control points.

In Section 3.2 we describe an algorithm to calculate a control point phase that interferes with nearby control points in a way that induces amplitude gain. For the purposes of haptic feedback, controlling the amplitude at each sampled control point is important, but the phase is not. We can thus choose an appropriate phase for each point that interferes constructively with other points that have high amplitude requirements and destructively with other points that must be low in amplitude.

It is also possible that a desired configuration of control point amplitudes, specified by a phased-array focusing technique, can be generated with large variances in amplitudes at the source transducers. This is unwanted, as running transducers at low power results in weaker phenomena, while using too much power can damage the array elements. As most ultrasonic arrays power all transducers at the same level, ignoring amplitude, this introduces artifacts when the amplitude recommendations made by a phased-array focusing technique are normalized away and not followed. We show in Section 3.3 that power demand variance can be penalized so that powering the array at full does not cause unwanted artifacts and detail deterioration.

In order to produce haptic feedback we must modulate the ultrasound at a perceptible frequency (as described in Section 2.2). Due

Figure 3: When control points are created they have a local residual field. By modifying the phases of the control points, this field can be exploited to apply gain to all control points. This is the function of the eigensystem solver. On the left, we show a set of four control points which have defaulted to the same phase setting. On the right, the eigenvector encodes a set of phases that results in added gain to all control points.

to this, the transducers have to be modulated by emitting ultrasound for half of the time, while being powered down for the remaining duration. This is inefficient, and so we describe in Section 3.4 a technique for splitting the output into multiplexed parts to effectively increase the strength of the array.

We then evaluate the algorithm by producing and comparing simulations and impressions on the surface of liquids to validate our implementation. Finally, we show the performance of the algorithm and its behavior with increasing numbers of control points.

    3.1 Waveform Synthesis Algorithm

To build a model of the acoustic field generated by the n ultrasonic transducers, we assume each transducer emits a planar wave at an aperture conceptualized as a two-dimensional wavefunction ψ. The Rayleigh-Sommerfeld integral [Ebbini and Cain 1989] gives the far field behavior of a two-dimensional wavefunction diffracting through an aperture of known geometry. It can be expressed in three dimensions as:

\[
\psi(x', y', z') = \frac{z'}{i\lambda} \iint \psi(x, y, 0)\, f(\Delta x, \Delta y, \Delta z)\, dx\, dy,
\]
\[
f(\Delta x, \Delta y, \Delta z) = \frac{e^{ik\sqrt{(\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2}}}{\left((\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2\right)^{3/4}}, \tag{2}
\]

in which

\[
\Delta x = x' - x, \qquad \Delta y = y' - y, \qquad \Delta z = z' - z, \tag{3}
\]

where x, y and z define coordinates relative to the aperture, while x', y' and z' define absolute positions in the far field, as illustrated in Figure 2. In our case, we define ψ(x, y, 0) to give a circular surface of constant phase and unit amplitude.

The functional form of f permits this to be expressed as a convolution for each slice in Δz, giving:

\[
\psi(x', y', z') = \frac{z'}{i\lambda}\,\bigl(\psi(\cdot, \cdot, 0) * f_{\Delta z}\bigr)(x', y') \tag{4}
\]
\[
= \frac{z'}{i\lambda} \iint \psi(x, y, 0)\, f_{\Delta z}(x' - x, y' - y)\, dx\, dy, \tag{5}
\]

which can be accelerated using the fast Fourier transform (FFT) algorithm [Nascov and Logofătu 2009]. With this, we can generate a look-up table that describes how the amplitude and relative phase of the sound wave from one transducer changes spatially.
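As an illustration of this step, the FFT-accelerated convolution of equations (4)-(5) can be sketched in a few lines of NumPy. This is our own minimal reconstruction, not the authors' implementation: the grid size, the 40 kHz piston source, the unpadded circular convolution, and the omission of constant prefactors are all simplifying assumptions.

```python
import numpy as np

def field_slice(aperture, dx, z, k):
    """Propagate a complex aperture field to the plane at distance z by
    FFT convolution with the diffraction kernel f of Eq. (2).
    `aperture` is a square complex array sampled with pitch `dx` (metres),
    `k` is the wavenumber 2*pi/lambda. Constant prefactors are omitted and
    the convolution is circular (a real implementation would zero-pad)."""
    n = aperture.shape[0]
    coords = (np.arange(n) - n // 2) * dx
    xx, yy = np.meshgrid(coords, coords, indexing="ij")
    r2 = xx**2 + yy**2 + z**2
    kernel = np.exp(1j * k * np.sqrt(r2)) / r2**0.75  # f, with (r^2)^(3/4)
    # circular convolution: FFT, multiply, inverse FFT
    spectrum = np.fft.fft2(aperture) * np.fft.fft2(np.fft.ifftshift(kernel))
    return np.fft.ifft2(spectrum) * dx * dx

# Example: 10 mm circular piston at 40 kHz, slice 50 mm from the aperture
wavelength = 343.0 / 40e3
k = 2 * np.pi / wavelength
n, dx = 128, 0.5e-3
c = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(c, c, indexing="ij")
aperture = (xx**2 + yy**2 <= (5e-3) ** 2).astype(complex)
lut_slice = field_slice(aperture, dx, 50e-3, k)
```

Repeating this per z-slice builds the look-up table of amplitude and relative phase for one transducer.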


Figure 4: By using both parts of the modulation period, complex modulated output can be effectively doubled in power by multiplexing two acoustic fields. a) The left hand side of the presented feedback. b) The right hand side of the presented feedback. c) The two fields switching at the modulation frequency required for haptic perception and generating a powerful result. d) The result when modulating between a single acoustic field and an unpowered state. The square shown is approximately 10 centimetres across.

For a transducer q, the sound wave can now be split into four parts: the product of an emission amplitude A_q^emit, a phase offset e^{iθ_q}, and, in the far field, an amplitude attenuation function A_q^attn(x', y', z') and a phase difference function e^{ikφ_q(x', y', z')}. The latter two of these are dealt with by the diffraction look-up function, as each of the n transducers is the same:

\[
\psi(x', y', z') = \sum_{q=1}^{n} A^{\mathrm{emit}}_q e^{i\theta_q}\, A^{\mathrm{attn}}_q(x', y', z')\, e^{ik\varphi_q(x', y', z')}
= \sum_{q=1}^{n} A^{\mathrm{emit}}_q e^{i\theta_q}\, \psi_q(x', y', z'), \tag{6}
\]

At this point we can choose a set of m control point positions in x', y' and z', which we will denote {C_1, ..., C_m}, attributing to each a complex number describing phase and amplitude, and assert that these are part of the field ψ(x', y', z') generated by the ensemble of ultrasonic transducers. The emission amplitudes and phase offsets required to produce these can then be obtained by specifying the necessary simultaneous equations as a complex-valued Ax = b linear system, where:

\[
A = \begin{bmatrix} \psi_1(C_1) & \cdots & \psi_n(C_1) \\ \vdots & \ddots & \vdots \\ \psi_1(C_m) & \cdots & \psi_n(C_m) \end{bmatrix}, \tag{7}
\]

the vector x = [A_1^emit e^{iθ_1}, ..., A_n^emit e^{iθ_n}]^T and b = [ψ(C_1), ..., ψ(C_m)]^T. Once in this form, each A_q^emit e^{iθ_q} is a complex coefficient that can be solved for using either an under-determined minimum norm solver or an over-determined least squares formulation. Such a formulation solves for the optimal set of initial amplitude and phase offsets that the transducers are required to emit in order to produce the desired control points.
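Concretely, with A assembled from the per-transducer look-up table, the solve reduces to a standard complex least-squares call. The sketch below uses a random stand-in for A rather than a modelled acoustic field; `np.linalg.lstsq` returns the minimum-norm solution in the under-determined case.

```python
import numpy as np

def solve_transducer_drive(A, b):
    """Solve the complex linear system A x = b of Eq. (7): row j of A
    samples every transducer's field at control point C_j, and b holds
    the desired complex field values there. For an under-determined
    system, np.linalg.lstsq returns the minimum-norm solution."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy example: 3 control points, 8 transducers. The random matrix is a
# stand-in for the field look-up values, purely for illustration.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8)) + 1j * rng.normal(size=(3, 8))
b = np.array([1.0 + 0j, 0.5j, 0.25 + 0j])
x = solve_transducer_drive(A, b)  # A @ x reproduces b to numerical precision
```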

Figure 5: Our setup for capturing the impressions of acoustic fields on the surface of oil. We also set up our system to render test shapes at an incline, as shown in this setup.

However, we are not as interested in reproducing a set of desired amplitude and phase point measurements as we are in reproducing amplitudes for the purpose of producing a fast tactile sensation. To create a variety of sensations we need to create multiple areas with both high and low amplitudes to provide a contrast and to control noise. To achieve this, we must first determine a set of compatible phases that can co-exist above the array at the desired amplitudes.

3.2 The Control Point Position Phase Eigenproblem

In order to create the set of desired amplitudes at given positions, we must first determine what phases relative to each other the control points must have to most effectively take advantage of local constructive and destructive interference. Because the complex-valued acoustic field smoothly changes in space, the spatial and amplitude relationships between control points also have repercussions for the phase relationships among control points, which can be chosen to amplify or dampen the output. For example, control points at the same amplitude will amplify each other as they move closer if the phase difference is zero, or can alternatively dampen and cancel as the distance narrows if the phase difference is half a period. This is also true for the local residual field surrounding each control point, as is shown in Figure 3.

We consider a good candidate for the phases of the m control points to be represented by the x-vector in an eigenproblem Rx = λx. We choose the matrix R to represent the phase shift and amplitude effects that an efficient solution for each individual control point has on each of the others. Given that we can quite simply find a solution for any one control point, we then use symbolic algebra to algebraically generate a simplified minimum-norm solution for each single control point case:

\[
A^{\mathrm{emit}}_q e^{i\theta_q} = \frac{A^{\mathrm{attn}}_q(C - q)\, e^{-ik\varphi_q(C - q)}\, A_C}{\sum_{i=1}^{n} \left(A^{\mathrm{attn}}_i(C - i)\right)^{2}}, \tag{8}
\]

where C is the position of the control point, with A_C its amplitude, while q is the transducer origin. Using these resulting complex values for the transducer emissions with equation (6), we generate hypothesized single control point fields ψ^C_1, ..., ψ^C_m. From these fields, we construct the matrix R as:

\[
R = \begin{bmatrix} \psi^{C}_1(C_1) & \cdots & \psi^{C}_m(C_1) \\ \vdots & \ddots & \vdots \\ \psi^{C}_1(C_m) & \cdots & \psi^{C}_m(C_m) \end{bmatrix}, \tag{9}
\]

such that both the matrix/eigenvector and eigenvalue/eigenvector product give a vector of the amplitudes and phases of the control points given an amplification eigenvalue. This is estimated with the assumption that the eigenvector describes a weighted mixing


Figure 6: Sets of shapes and boundaries as a rendered acoustic field generated by different techniques, which have then been impressed upon the surface of oil. Note that all of these figures have been generated without the use of the modulation efficiency technique as described in Section 3.4. A) Linear multiplexing approach. Some points are missing, some misplaced, with prominent secondary maxima. B) Our technique without regularization. Some points are weak, some misshapen. C) Our technique with regularization. Points are less erratic and have reduced noise. 1) Two intersecting rings, each with 16 control points, totalling 32 points overall. 2) A five-pointed star containing 30 control points. 3) Two concentric circles, wherein the outer circle is made from 24 points and the center circle is made from 8 points, with 32 points overall. 4) A six-pointed asterisk shape made up of 21 control points.

of each single control point solution which generates maximally amplified control points (assuming the local modification effects are preserved in a global solution).

Finding a large eigenvalue then corresponds to a large constructive amplification of the phases in the eigenvector, which makes using these eigenvector phases desirable in any later linear system, as the benefits of this amplification should generate similar amplitudes, and so more efficient transducer usage.

As only the eigenvector with the largest eigenvalue (amplification) needs to be found, a very simple power method approach can be employed. Also, as this is a step that makes the array more efficient, the method does not need to completely converge, and can be estimated and restarted from a previous solution, enabling a time-bounded solution.

This ensures that when the solution of the linear system is obtained, the given control points are able to coexist while minimizing both unwanted mutual exclusion caused by destructive interference and noise caused by constructive interference.
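A time-bounded power iteration of this kind can be sketched as follows. This is our own illustrative code: the small Hermitian matrix stands in for a real control-point field matrix R from equation (9), and the fixed iteration count substitutes for a convergence test.

```python
import numpy as np

def control_point_phases(R, iters=50, x0=None):
    """Power iteration for the dominant eigenvector of R (Sec. 3.2).
    Only the element-wise phases of the eigenvector are returned; x0
    allows warm-starting from a previous frame's solution, as the paper
    suggests, keeping the run time bounded."""
    m = R.shape[0]
    x = np.ones(m, dtype=complex) if x0 is None else x0.astype(complex)
    for _ in range(iters):
        x = R @ x
        x /= np.linalg.norm(x)
    return x / np.abs(x)  # unit-magnitude phase factor per control point

# Small Hermitian stand-in for a control-point field matrix
R = np.array([[1.0, 0.3j, 0.1],
              [-0.3j, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
phases = control_point_phases(R)
```

Because the iteration only needs the dominant eigenvector approximately, stopping after a fixed number of matrix products gives the predictable per-frame cost the paper asks for.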

    3.3 Weighted Tikhonov Regularization

The set of computed aperture wavefunctions for the ultrasound transducers can be seen as a linear basis set for the space of all possible interference patterns for this group of emitters, so the algorithm should find the best pattern to fit the desired output. Although we have obtained sets of control points with feasibly chosen phases, our solution method is unaware of the physical power limitations of the array. While we could follow Gavrilov and Hertzberg to optimize for the most efficient array power while not exceeding a maximum, their techniques are iterative and rely on the small numbers of control points involved producing thin matrices that are quickly decomposed. These therefore do not scale well to the large systems of control points needed for this system.

In order to find a balance of solving for the emitter configuration to a scale factor and a reasonable array efficiency, we turn to regularization techniques. The Tikhonov regularization technique is of particular interest because it has both the ability to constrain the solution of the linear system, and so the power requirements, and an easily specified matrix augmentation. We can augment our


Figure 7: The relative performance of each of our techniques, as the time taken to prepare an array update for a given number of control points randomly placed above the array. As can be seen from the graph, the system scales well at high control point counts. The regularized system is more costly to compute and the two-sided modulation takes around twice the time to compute, although this narrows when many control points are considered.

original Ax = b linear system from Section 3.1 as:

\[
\begin{bmatrix} A \\ \mathrm{diag}\!\left(\beta_1^{\sigma}, \ldots, \beta_n^{\sigma}\right) \end{bmatrix} x = \begin{bmatrix} b \\ \mathbf{0} \end{bmatrix}, \tag{10}
\]

where we have augmented the matrix with a block of diagonals raised to the power σ. Calculating appropriate β_q values can be achieved with:

\[
\beta_q = \sum_{i=1}^{m} \frac{A^{\mathrm{attn}}_q(C_i - q)\, A_{C_i}}{m}. \tag{11}
\]

Care should be taken that β_q > 0, so that the problem matrix remains full rank. The value σ is now chosen to be a value between zero and one. Here zero is a preference for a minimum-norm solution and can be seen as turning off the regularization effect up to a scaling factor; non-integral values give fractional regularization effects. A σ-value of one then results in an attempt to counter-balance the full output power of the transducers at the control points. This solution specifies complex transducer output values that are of more equal amplitude, which results in less variance overall. Due to this, if the transducers are then powered fully, fewer artifacts are created, resulting in appreciably better output.
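The augmentation of equation (10) amounts to stacking a diagonal block under A and zeros under b before a least-squares solve. The sketch below is our own: the β weights are supplied directly rather than computed from a transducer model via equation (11), and the toy system is random.

```python
import numpy as np

def regularized_solve(A, b, beta, sigma=0.5):
    """Weighted Tikhonov solve of the augmented system of Eq. (10):
    stack A on top of diag(beta**sigma) and extend b with zeros, then
    least-squares. beta holds per-transducer weights in the spirit of
    Eq. (11); sigma in [0, 1] trades a minimum-norm-like solution (0)
    against counter-balancing full transducer power (1)."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.diag(beta ** sigma).astype(A.dtype)])
    b_aug = np.concatenate([b, np.zeros(n, dtype=A.dtype)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Toy system: 4 control points, 16 transducers (random stand-in fields)
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 16)) + 1j * rng.normal(size=(4, 16))
b = rng.normal(size=4) + 1j * rng.normal(size=4)
x_reg = regularized_solve(A, b, np.full(16, 2.0), sigma=1.0)
```

With uniform β this reduces to standard ridge regularization, which shrinks the drive vector relative to the pure minimum-norm solution.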

    3.4 Modulation Efciency

Having found a set of phases and amplitudes to use to generate an acoustic field, as previously described they can be made perceptible via a low frequency modulation. This means, for example, that for a 200 Hz modulation the array is alternately powered for a 1/400th second duration and unpowered for the next 1/400th second. An unwanted consequence of this is a loss in power, but the cycle is necessary for the modulation to be generated.

Figure 8: The setup of our system for generating three-dimensional haptic shapes.

We circumvent this loss by using both halves of the modulation cycle to contribute to the output. By considering the set of control points to be generated and splitting it into two sets, we can emit them in an alternating fashion, removing this shortcoming. As the phase eigenvector solution is more effective when the points are close by, we use principal component analysis to find a splitting plane that generates two control point groups which are locally dense. Then, by alternating between these two acoustic field solutions, an overall perception using many control points can be generated that is almost twice as powerful as before, as can be seen from Figure 4.
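The PCA split can be sketched with a singular value decomposition: project the control points onto their first principal component and cut at the centroid. This is our own illustrative code, not the authors' implementation.

```python
import numpy as np

def split_control_points(points):
    """Split control points into two locally dense groups (Sec. 3.4
    sketch): cut with a plane through the centroid whose normal is the
    first principal component of the point set. Returns a boolean mask
    selecting one group."""
    centered = points - points.mean(axis=0)
    # right-singular vectors of the centered data = principal components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    side = centered @ vt[0]
    return side >= 0

# Two clusters along x; the splitting plane should separate them
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [5.0, 0.0, 0.0], [5.1, 0.0, 0.0]])
mask = split_control_points(pts)
```

The two masked subsets then become the alternating half-cycle fields.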

    3.5 Algorithm Evaluation

To visually inspect the acoustic output of the technique, we used an approach in which we turn the ultrasonic array to face the surface of a thin layer of oil, as shown in Figure 5. When the ultrasound is focused, the oil surface displaces. This displacement can be enhanced by lighting the surface from a shallow angle, refracting light through the oil and revealing the structure of the acoustic field as caustics. This technique is however imperfect, as the ultrasound is not completely reflected from the surface and resonance can occur, causing artifacts that appear similar to noise. Lighting conditions can also cause the appearance of noise as the setup is sensitive to angle. In spite of the drawbacks, this approach is simple and effective at producing imagery from shaped acoustic fields.

To evaluate the method, we generated simulations and compared them to a visual inspection using the oil impression technique to determine the performance of our algorithm both with and without the use of regularization. For comparison we have also included the results of the linear multiplexing technique, where control points are considered singly via a time-of-flight calculation, summed, and the complex-valued transducer output normalized. This can be written as:

\[
e^{i\theta_q} = \frac{\sum_{i=1}^{m} e^{2\pi i \lVert C_i - q \rVert / \lambda}}{\left\lvert \sum_{i=1}^{m} e^{2\pi i \lVert C_i - q \rVert / \lambda} \right\rvert}. \tag{12}
\]
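For reference, this baseline can be sketched directly from equation (12). This is our own code, used only to illustrate the comparison technique; the sign convention of the per-point phasor is an assumption.

```python
import numpy as np

def multiplex_phases(control_points, transducer_pos, wavelength):
    """Linear 'time-of-flight' baseline of Eq. (12): each transducer sums
    one focusing phasor per control point and keeps only the phase of the
    sum, normalizing the amplitude away."""
    # pairwise distances, shape (num_transducers, num_control_points)
    d = np.linalg.norm(
        transducer_pos[:, None, :] - control_points[None, :, :], axis=2)
    s = np.exp(2j * np.pi * d / wavelength).sum(axis=1)
    return s / np.abs(s)

# 8 transducers in a line, one control point 20 cm above the array
transducers = np.array([[x * 0.01, 0.0, 0.0] for x in range(8)])
points = np.array([[0.035, 0.0, 0.2]])
phases = multiplex_phases(points, transducers, 343.0 / 40e3)
```

With a single control point the normalization is trivial and each transducer simply carries the focusing phase for that point.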

The results of simulations using each of the three techniques are shown side-by-side for four different shapes in Figure 6.

In each case, the amplitude of each transducer is considered to be unit, with only the ultrasound phase controllable. Although this means that there is no difference in overall power, the fidelity of the results and the consistency of control points in each arrangement is markedly different.

In the algorithm comparison shown, we found that although the linear multiplexing technique was effective for points close to the


Figure 9: Hand-object intersection sampling. a) The hand in the scene touching the virtual cube with the ultrasonic transducer array position shown below it. b) The hand is converted into sixteen planes, some of which intersect the object in the scene, cutting the hand. c) The hand-object intersections are found as line segments and processed into contiguous arcs, from which control points are derived.

array center, it failed or gave misshapen results for points away from the center; this can be seen from Figure 6 parts A1, A3 and A4. This is due to amplification only occurring when points are close together or in the center. The differences between the normal and regularized versions of the algorithm are more subtle, for instance the missing control point in the bottom right of B1, which appears in C1, and the markedly more regular amplitude of the points in C1. The erroneous control point that appears at the lower right tip of B2, but not in C2, is a further example of the improvement that the regularization makes on top of the technique.

We then tested the update rate of the system with these control point solvers, as shown in Figure 7, using a GeForce GTX 780 Ti graphics card to compare the relative speeds of each technique. The regularization technique was more computationally costly, and for small control point sets the two-sided modulation solution took twice the solution time, but as the sets grow larger the relative differences narrow slightly.

From these results we conclude that volumetric shapes can be created using the algorithm that we have described. In the next section we present the hardware implementation and how the hand-shape intersection sampling is carried out.

    4 Implementation

    4.1 System Setup

We used a system consisting of an ultrasonic phased transducer array actuated by a driver circuit and a hand tracker, together with a PC, as shown in Figure 8. The algorithms were implemented with OpenCL on a GPU.

The ultrasound is emitted from an array of 320 piezoelectric transducers. These are driven by 5 interconnected driver boards, each with two processors. The computed phase delays and amplitude values are sent from the PC to the USB controller, which consists of a USB interface and a processor. This sorts the received data and forwards it on to the processor controlling the corresponding transducers. All of the processors have synchronized clocks. They then produce one square wave output for each transducer in accordance with the phase delays and amplitude values. These output signals are amplified from 5V to 15V before leaving the driver board.

We used XMOS L1-128 processors running at 400MHz. The outputs had a refresh rate of 2MHz. We chose muRata MA40S4S transducers as they produce a large amount of sound pressure (20 Pascals at a distance of 30cm) and have a good angle of directivity (60°).
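At a 2MHz refresh rate, each 40kHz carrier period spans 2,000,000 / 40,000 = 50 ticks, so phase resolves to 1/50 of a cycle. The following sketch shows one way such a driver might quantize its outputs; the duty-cycle encoding of amplitude is our assumption for illustration, not a detail given in the text:

```python
import math

def phase_to_ticks(phase_rad, amplitude, refresh_hz=2_000_000, carrier_hz=40_000):
    """Quantize a phase delay into square-wave timing ticks at the
    driver refresh rate (2 MHz / 40 kHz = 50 ticks per period).
    Amplitude is (hypothetically) encoded as the high-time duty cycle."""
    ticks_per_period = refresh_hz // carrier_hz  # 50 ticks per carrier cycle
    delay = round((phase_rad % (2 * math.pi)) / (2 * math.pi) * ticks_per_period)
    high = round(amplitude * ticks_per_period / 2)  # 50% duty at full amplitude
    return delay % ticks_per_period, high
```

With 50 ticks per period, the worst-case phase quantization error is 1/100 of a wavelength, small relative to the roughly 8.6mm wavelength at 40kHz.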

Figure 10: The shapes involved in the user study. Shape a) was used as a training shape, before the study was carried out on shapes b) to f). Each of the shapes was scaled equally on all axes to fit a 10cm cube. The white cuboid indicates the position of the array.

To find hand and virtual object intersections, we used a Leap Motion Controller [Leap Motion Inc. 2012]. The Leap Motion Controller has a range of 100cm and a field of view of 140 degrees, and is specialized for hand tracking, making it suitable for our requirements.

    4.2 Hand-Object Intersection Sampling

3D shape recognition is highly dependent on edges and vertices [Plaisier et al. 2009]. In order to create the most effective cues for shape in a volumetric space, we must generate an edge analogue to facilitate shape recognition in mid-air. To do this, we must detect the interactions between hands and shape boundaries, so that the acoustic field that we generate can create the necessary haptic feedback.

From the hand model provided by the Leap Motion Controller, we take the bone and joint positions to create the model that we use for shape boundary intersection. Each hand contains sixteen planar quadrilaterals: a palm polygon and three separate polygons for each finger.

These object-hand interactions, as shown in Figure 9, result in polygon-polygon intersections, and the resulting intersection primitive is a line segment. These are then assembled into contiguous arcs, which can be sampled to produce control points. Different sampling strategies were then employed to optionally enhance regions of high curvature.
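The simple uniform-density strategy might be sketched as follows; the 15mm spacing matches the value used for the study later, while the `sample_arc` helper and polyline arc representation are our own illustrative choices:

```python
import numpy as np

def sample_arc(polyline, spacing=0.015):
    """Place control points every `spacing` metres of arc length along
    a contiguous intersection arc (a list of 3D points in metres)."""
    pts = np.asarray(polyline, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)      # per-segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    out = []
    for s in np.arange(0.0, cum[-1], spacing):              # target arc lengths
        i = min(np.searchsorted(cum, s, side='right') - 1, len(seg) - 1)
        f = (s - cum[i]) / seg[i] if seg[i] > 0 else 0.0    # fraction along segment i
        out.append(pts[i] + f * (pts[i + 1] - pts[i]))      # linear interpolation
    return np.array(out)

control_points = sample_arc([[0, 0, 0], [0.06, 0, 0]])      # 6 cm straight arc
```

A curvature-adaptive strategy, described next, replaces the constant `spacing` with one that shrinks where the local curvature indicator is large.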

    Curvature Adaptive Parameterization

One method of modifying the sampling from a simple uniform density approach is to change the control point density dynamically by correlating control point density with curvature. This effectively increases the strength of the haptic feedback at areas of high curvature, which serves to draw attention to geometrically salient features of shapes.

We determine these features by considering curvature approximations on meshes. As we are primarily dealing with small, simple meshes, we turn to the identification of local angular defects (the variation of the angular sum of the surrounding triangles from 2π) in the mesh as our indicator function.

Figure 11: The user study setup.

To tie this to control point density, we interpolate the curvature along mesh edges. Then at the point where the mesh and hand interact, the polygon-polygon intersection occurs, and the line segment that is generated has its local curvature indicator specified by:

\kappa_i(t) = \kappa_0 e^{c(t_0 - t)} + \kappa_1 e^{c(t - t_1)}    (13)

where t is the parametric coordinate of the line, t_0 the point intersecting the mesh edge at the beginning of the line, t_1 the point intersecting the mesh edge at the end, c a decay constant, \kappa_0 the edge curvature at the beginning and \kappa_1 the curvature at the end. The constant c is chosen such that as the intersected straight line segment becomes longer, the curvature interpolated from the endpoints is subject to exponential decay, causing the curvature sampled from the center of a flat polygon to tend towards zero.

This curvature \kappa_i(t) can then be used to specify local control point density, where the minimum control point density is expressed when the curvature is zero, and a higher control point density is used when the curvature is large.
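A direct transcription of Equation 13, together with one possible curvature-to-density mapping, is sketched below; the decay constant c = 20 and the linear density scaling are illustrative assumptions, not values given in the text:

```python
import math

def curvature_indicator(t, t0, t1, kappa0, kappa1, c=20.0):
    """Eq. 13: endpoint curvatures decay exponentially along the segment,
    so the interior of a long flat-polygon intersection tends towards zero."""
    return kappa0 * math.exp(c * (t0 - t)) + kappa1 * math.exp(c * (t - t1))

def point_density(kappa, base_density=1 / 0.015, gain=2.0):
    """Map the curvature indicator to local control point density:
    the uniform density (one point per 15 mm) at zero curvature,
    rising linearly with the indicator (gain is a free parameter)."""
    return base_density * (1.0 + gain * kappa)
```

Sampling with a spacing of `1 / point_density(kappa)` then concentrates control points at edges and corners while leaving flat regions sparsely actuated.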

    5 User Study

In Section 3.5, we determined that the system is capable of producing shapes. However, as there have been no studies performed for ultrasonic haptic shapes, we need to evaluate the effectiveness of our system in conveying the shape information. Thus, we conducted an experiment where participants were asked to identify volumetric shapes by active exploration, without any visual feedback, in the volume above the transducer array.

We generated six volumetric shapes in total, as shown in Figure 10: (a) cone, (b) sphere, (c) pyramid, (d) horizontal prism, (e) vertical prism and (f) cube. All the shapes were contained in a cubic region with 10cm sides at 17cm height, centred above the transducer array. The region containing shapes therefore extends between 12cm and 22cm above the array surface. We decided to use 15mm of arc length per control point as this gives a good trade-off between actuated volume and number of control points for a uniform distribution.

From our trial runs, we found that it was easy to confuse the cone and the pyramid due to their similar footprints during hand-shape intersections. Without any visual cues, it is difficult to identify the exact location of parts of the shape, resulting in misinterpretation of the haptic feedback. Thus, the cone was used as a practice shape. Each participant was shown a printout of all six shapes as a list of choices. Participants were instructed to explore the area above the array and asked to guess which of the six shapes they thought it was. As this was the practice, they were informed whether or not they were correct. After that, they were asked to explore the cone for a few more minutes until they were confident that they could identify the shape well.

Figure 12: Percentage accuracy across all the participants for each shape. Error bars denote +/- standard error of the mean.

Once the practice was completed, the actual tests were run with the five other shapes, i.e. (b) to (f). Each of the five shapes was repeated three times for a total of 15 trials, and the trials were presented in a randomized order. Participants were informed that the cone would not be included in the test, and were not told if their answers were correct after each trial. Additionally, the participants were informed of neither the number of trials nor whether all the shapes would be present. They also wore headphones that played white noise to mask any audible cues from the system. Figure 11 shows the setup of the user study.

    Results

Six participants (all male, aged 27 to 35) took part in the user study, which lasted about 25 minutes. They were all coincidentally right-handed. None of them had previously taken part in any studies involving ultrasonic haptic shapes.

The performance of each participant in correctly identifying all the shapes ranged from 66.7% to 100% (mean 80%, SD 12.6). Figure 12 shows the percentage accuracy for each shape across all the participants (out of 18 trials). The pyramid was the easiest to identify (94.4%, SEM 5.6%) and the sphere the hardest (61.1%, SEM 10.2%). Table 1 shows which shapes were more likely to be confused by the participants. The pyramid was never confused with the horizontal prism, vertical prism or cube. The horizontal prism was never confused with the vertical prism or the sphere. The vertical prism was never confused with the cube or the pyramid.

                      Sphere   Pyramid   Horiz. Prism   Vert. Prism   Cube
    Sphere              11        2           4              1
    Pyramid              1       17
    Horizontal Prism              3          14                          1
    Vertical Prism       1                    1             16
    Cube                 1        2           1                         14

Table 1: Confusion matrix showing the shapes that were most frequently confused across all the participants (out of 18 trials for each shape). Rows indicate the presented shape and columns the responses; blank cells are zero.


There are perceptual issues associated with the localisation of ultrasonic haptic feedback in the absence of visual feedback, which would have resulted in the misidentification of the shapes [Wilson et al. 2014; Hoshi et al. 2010]. Research has shown that exploratory strategies such as hand motion, contour following or one-finger/one-hand exploration can also affect the information derived from an object's shape [Lederman and Klatzky 1987]. In this study, the participants were never informed of any technique to help them identify a shape.

As can be seen from the results, even with very little training and naive users, the participants were generally successful at identifying the shapes without any visual feedback. Overall, the results demonstrate that our system is effective in rendering perceivable haptic shapes.

    6 Applications

Inaccessible objects, such as those in museum cases or inside the human body, can be visually explored through bi-directional mirrors or neurosurgical props. While these methods allow the user to interact with the objects and intersect them with their hands, they offer no haptic feedback. Augmenting them with our system enables superior spatial learning and the ability to highlight valuable information through haptic feedback. Figure 13 (left) depicts a surgeon exploring a CT scan with haptic feedback, allowing them to feel tumors.

Touchless interfaces are becoming increasingly common. They afford an intuitive, flexible user interface but lack the haptic feedback provided by physical buttons and controls. In many situations, such as in the cockpit of a vehicle (see Figure 13 (center)), the user will want to operate the system while their eyes are busy on another task. Integrating our system would allow the user to feel the geometry of an interface and localize a specific item.

Virtual reality has long been a goal of interactive systems. Haptic feedback provides our sense of proprioception, kinesthesia and touch, making it essential for an effective system. Recent advances in head mounted displays have greatly improved the realism of the visual feedback, yet haptic feedback still requires proxy or wearable haptic devices. This unnatural disconnection breaks the immersion of a virtual reality. Our system would enable users to freely explore the virtual world unencumbered, while receiving haptic feedback from the objects that they interact with, as shown in Figure 13 (right).

    7 Limitations and Future Work

We have demonstrated a system for producing volumetric haptic shapes in mid-air. We have also shown that users are capable of discriminating and identifying volumetric haptic shapes with high accuracy. Nonetheless, the ultrasonic haptic feedback technology has some drawbacks.

Firstly, the shape created must be in the working volume of the device for it to function correctly. Too far from the device, or moving the shapes out of the working volume, and the device loses power, being able to focus less and less as the user's hands move outside of the active region. As the pressure drops off as the reciprocal of distance from the transducer, the intensity and power obey an inverse square law with distance. We find that currently our system performs best between 15cm and 50cm.

Secondly, our system only functions with hands. Other parts of the human body can have more difficulty detecting the particular frequency of vibrations generated by the device, or are less sensitive to vibration generally. Given this, our system implementation is limited to hand-based haptics.

Thirdly, we are limited by the size and power of the transducer array in the number and strength of control points. Increasing the number of control points reduces the strength of all control points with a fixed number of transducers. This implies that there is an important trade-off between the point sampling density and rendering quality when generating shapes. While this is in part ameliorated by only rendering the hand-object intersections to decrease the control point count, the problem remains. In this light, the balance between both properties at which shape identification is most efficient is yet to be investigated.

For future work, we plan to investigate this issue to determine where the optimal balance between strength and rendering quality lies. By exploiting this point, further increases in shape complexity and subtlety could be realized.

    8 Conclusions

This paper demonstrated that our algorithm can control the volumetric distribution of the acoustic radiation force field when using a two-dimensional phased ultrasound array. By controlling the acoustic field, we are able to produce volumetric haptic shapes in mid-air. The algorithm is capable of running interactively, allowing us to create real-time haptic sensations and dynamically changing shapes.

The capability to create mid-air haptic shapes is advantageous as it allows the user to walk up and use the system. The user does not need to wear any attachments or be restricted by the use of any tools, thus encouraging spontaneous use and allowing freedom to explore. This is the first algorithm that we know of that creates and controls feelable three-dimensional shapes using a volumetric distribution of ultrasound.

Figure 13: Left: Our system adds haptic information to the results of a CT scan. Center: Augmenting the dashboard of a car with our system enables eye-busy interactions. Right: Opening a door to a room filled with monsters in a game becomes more immersive with our system.


    Acknowledgements

We thank Matt Sutton for taking the images and the video. This work is supported by the European Research Council (Starting Grant Agreement 278576) under the Seventh Framework Programme.

    References

ALEXANDER, J., MARSHALL, M. T., AND SUBRAMANIAN, S. 2011. Adding haptic feedback to mobile TV. In CHI '11 Extended Abstracts on Human Factors in Computing Systems, ACM, New York, NY, USA, CHI EA '11, 1975–1980.

BERKELMAN, P. J., BUTLER, Z. J., AND HOLLIS, R. L. 1996. Design of a hemispherical magnetic levitation haptic interface device. In Proceedings of the ASME Winter Annual Meeting, Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 17–22.

BIANCHI, G., KNOERLEIN, B., SZÉKELY, G., AND HARDERS, M. 2006. High precision augmented reality haptics. In Proc. EuroHaptics, 169–178.

CARTER, T., SEAH, S. A., LONG, B., DRINKWATER, B., AND SUBRAMANIAN, S. 2013. UltraHaptics: multi-point mid-air haptic feedback for touch surfaces. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, ACM, 505–514.

DALECKI, D., CHILD, S., RAEMAN, C., AND CARSTENSEN, E. 1995. Tactile perception of ultrasound. The Journal of the Acoustical Society of America 97, 3165.

DIPIETRO, L., SABATINI, A. M., AND DARIO, P. 2008. A survey of glove-based systems and their applications. Trans. Sys. Man Cyber Part C 38, 4 (July), 461–482.

EBBINI, E. S., AND CAIN, C. A. 1989. Multiple-focus ultrasound phased-array pattern synthesis: optimal driving-signal distributions for hyperthermia. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control 36, 5, 540–548.

GAVRILOV, L., AND TSIRULNIKOV, E. 2002. Mechanisms of stimulation effects of focused ultrasound on neural structures: Role of nonlinear effects. Nonlinear Acoustics at the Beginning of the 21st Century 1, 445–448.

GAVRILOV, L., AND TSIRULNIKOV, E. 2012. Focused ultrasound as a tool to input sensory information to humans (review). Acoustical Physics 58, 1–21.

GAVRILOV, L. 2008. The possibility of generating focal regions of complex configurations in application to the problems of stimulation of human receptor structures by focused ultrasound. Acoustical Physics 54, 269–278.

GESCHEIDER, G., AND WRIGHT, J. 2008. Information-Processing Channels in the Tactile Sensory System: A Psychophysical and Physiological Analysis. Psychology Press.

HAYWARD, V., ASTLEY, O., CRUZ-HERNANDEZ, M., GRANT, D., AND ROBLES-DE-LA-TORRE, G. 2004. Haptic interfaces and devices. Sensor Review 24, 1, 16–29.

HERTZBERG, Y., NAOR, O., VOLOVICK, A., AND SHOHAM, S. 2010. Towards multifocal ultrasonic neural stimulation: pattern generation algorithms. Journal of Neural Engineering 7, 5, 056002.

HOSHI, T., TAKAHASHI, M., IWAMOTO, T., AND SHINODA, H. 2010. Noncontact tactile display based on radiation pressure of airborne ultrasound. IEEE Transactions on Haptics 3, 3, 155–165.

LEAP MOTION INC., 2012. Leap Motion Controller. http://leapmotion.com.

LEDERMAN, S. J., AND KLATZKY, R. L. 1987. Hand movements: A window into haptic object recognition. Cognitive Psychology 19, 3, 342–368.

NASCOV, V., AND LOGOFĂTU, P. C. 2009. Fast computation algorithm for the Rayleigh-Sommerfeld diffraction formula using a type of scaled convolution. Appl. Opt. 48, 22 (Aug), 4310–4319.

PLAISIER, M. A., BERGMANN TIEST, W. M., AND KAPPERS, A. M. L. 2009. Salient features in 3-D haptic shape perception. Attention, Perception, & Psychophysics 71, 2, 421–430.

REINER, M. 2004. The role of haptics in immersive telecommunication environments. IEEE Transactions on Circuits and Systems for Video Technology 14, 3 (March), 392–401.

SODHI, R., POUPYREV, I., GLISSON, M., AND ISRAR, A. 2013. AIREAL: interactive tactile experiences in free air. ACM Transactions on Graphics (TOG) 32, 4, 134.

SUZUKI, Y., AND KOBAYASHI, M. 2005. Air jet driven force feedback in virtual reality. Computer Graphics and Applications, IEEE 25, 1 (Jan), 44–47.

WILSON, G., CARTER, T., SUBRAMANIAN, S., AND BREWSTER, S. A. 2014. Perception of ultrasonic haptic feedback on the hand: Localisation and apparent motion. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, CHI '14, 1133–1142.

WUSHENG, C., TIANMIAO, W., AND LEI, H. 2003. Design of data glove and arm type haptic interface. In Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003. HAPTICS 2003. Proceedings. 11th Symposium on, 422–427.
