
Haptic Interaction with Objects in a Picture based on Surface Normal Estimation

Seung-Chan Kim and Dong-Soo Kwon

Telerobotics and Control Laboratory, KAIST, Korea

ABSTRACT

In this paper we propose a haptic interaction system that physically represents the underlying geometry of objects displayed in a 2D picture, i.e., a digital image. To obtain the geometry of an object captured in the picture, we estimate the physical transformation between the object plane and the image plane from homographic information, which locates the viewed face of the object. We then calculate the rotated surface normal vector of the object's face and place it on the corresponding part of the 2D image. The purpose of this setup is to create a force that can be rendered along with the image without distorting the visual information. For example, while touching or exploring the 2D image, a user receives a resistive force when moving against a slanted face. We evaluated the proposed haptic rendering scheme using a set of pictures of objects with different orientations. The experimental results show that participants reliably identified the geometric configuration by touching the object in the picture. We conclude with applications developed on top of the proposed algorithm.

KEYWORDS: object geometry, image understanding, resistive force, force display

INDEX TERMS: H.5.2 [Information Interfaces and Presentation]: User Interfaces - Haptic I/O, Interaction styles, Graphical user interfaces (GUI).

1 INTRODUCTION

Images are one of the most popular forms of media, as they represent and convey our everyday experiences. The recent development of tablet computers has made interactions with images easier than ever before. We believe that these devices increase not only image accessibility but also the desire to touch and interact physically with the objects that reside in the picture. One way of giving systems physical information about a 2D image is to define corresponding haptic textures [12, 13]. In terms of haptic texturing, both procedural and image-based approaches have been widely adopted in previous research to define haptic fields. A procedural texture or depth field is a computer-generated image created from a mathematical model. One of the important characteristics of procedural texturing is synthesis, which means that a texture can be generated from a model rather than from a digitized or fixed texture image [19]. Because this type of synthesis is parameterized, it has no fixed resolution and allows more control than an image-based model. For this reason, this approach has been widely employed for generating haptically meaningful patterns [6, 11, 23]. However, issues related to parameter tuning and aliasing have arisen in this area [19]. Image-based approaches have also been employed in haptics for generating force fields [5, 7, 12, 22] and tactile fields [13, 25]. Although a fixed companion image produces only static texture fields, it has been widely adopted in haptics because it is not only intuitive in terms of information mapping but also efficient in terms of handling. Another way is to extract or estimate haptically meaningful features, such as edges and texture patterns, from the image itself. Early pioneering work by Minsky et al. proposed a method that haptically translates and represents grayscale images [18]. Their system could simulate haptic texture or bumps by providing force information to the user's hand based on the gradient features of objects. Another recent work adopted a similar mapping scheme to induce perceptual illusions in the z direction while the user explores a flat surface [22]. To generate haptic textures in real time, many researchers have proposed a wide variety of haptic rendering methods based on incoming visual information (i.e., moving pictures from a camera). An earlier work proposed a vibrotactile display that substitutes visual information with tactile stimuli [1]. It was intended to deliver haptic feedback to the user based on an analysis of the incoming visual scene from a camera, and the authors demonstrated the possibility of using a tactile apparatus to display visual information. More recently, Israr et al. proposed a visuo-tactile sensory substitution device mainly for the visually impaired [10].

Seung-Chan Kim is with the Telerobotics and Control Laboratory, KAIST, Daejeon, Korea (e-mail: [email protected]). Dong-Soo Kwon is with the Telerobotics and Control Laboratory, KAIST, Daejeon, Korea (corresponding author; phone: +82-42-350-3042; fax: +82-42-350-8240; e-mail: [email protected]).

Figure 1. An example of haptic interaction with a 2D image based on surface normal estimation. In this application, a user receives resistive forces according to the estimated geometric features while touching and exploring the planar image space.


Their system is intended to extend the user's reach with tactile feedback so that the user can touch the surrounding world on a touch screen. Indeed, a series of experiments showed that the participants could identify target objects remarkably well. The methods described thus far are generally suitable for representing regions with abrupt changes, such as edges and fine textures. In this paper, we focus on extracting the macro-level geometry that appears in a picture. The macro-geometry features considered in the present study are slanted planar objects, ranging from walls, books, and tables to the planar patches that compose an object, such as a curved wall. To do this, we first estimate the pose of an object and then calculate the oriented surface normal vector. During the interaction phase, the level of resistive force is determined by the direction of the user's motion with respect to the estimated geometry. Finally, we evaluate the proposed haptic rendering scheme using a set of pictures of everyday objects.

2 HAPTIC INTERACTION WITH 2D IMAGE

2.1 Overview

Although all information regarding objects is projected onto a planar space, we can generally understand their geometric structure when we look at pictures. The main objective of the present study is to make the objects in the picture haptically interactive, as if they were located in front of the viewer. One constraint of our approach is not to distort visual information, meaning that our algorithm is intended to overlay haptic features onto the original image (i.e., it is not a 3D visual reconstruction). The proposed approach begins by estimating the geometric configuration of the objects. For example, a partially opened door in front of a viewer is a rectangular face rotated about its y-axis from the viewpoint of the viewer. We use the terms extrinsics and physical transformation interchangeably to describe the transformation between the object plane and the image plane (the camera's viewpoint). Once we obtain this relationship, we render it as a resistive force. For example, when manually exploring a 2D picture, we apply a resistive force if the 2D cursor is moving against a slanted surface. This sense is adopted from our everyday manual interactions with faceted objects. Minsky et al. described it when they reported, "It is very difficult to move to the top of a bump and easy to fall off the bump's back into a lower region of a simulated surface" [18]. They implemented this concept by pulling the user's hand towards a low region and away from a high region based on the local gradient of a depth map. Some previous studies in haptics [18, 20, 22] adopted a similar concept to render 3D objects using lateral force instead of normal force. Another reason we employ a lateral rendering scheme here is that it fits our interaction styles on 2D images; note that one of the typical exploratory movements on a flat surface is lateral motion [14]. Figure 2 shows our rendering scheme, which utilizes the slope captured in the picture to generate the resistive force.

Figure 2. Haptic rendering of a slanted surface (object space vs. image space; moving against the slant is difficult, moving with it is easy). The proposed system provides forces either in the direction of motion or against it, depending on the geometry captured in the picture.

In the next section, we describe the estimation process through which the physical transformation is determined.

2.2 Physical Transformation from Picture

We estimate the physical transformation between the object plane and the image plane using a pose estimation process, which is widely utilized in computer vision applications and even for input devices such as a 3D joystick [8, 24]. If we define $X \in \mathbb{R}^3$ as a point in the object space (the real world) and $x \in \mathbb{R}^2$ as a point on the image, or $\mathbf{X}$ and $\mathbf{x}$ as their homogeneous counterparts, the relationship between the two spaces can be described as

$$
\mathbf{x} = sH\mathbf{X} = sK\,[R \mid t]\,\mathbf{X}, \qquad
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
= s
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_1 & r_2 & r_3 & t \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
\qquad (1)
$$

where $s$ is a scale factor and $H$ is the homography matrix, which combines the physical transformation, i.e., the rotation $R$ and the translation $t$, with the camera matrix $K$. Each $r_i$ is a 3×1 column vector of the rotation matrix $R$. $f_x$ and $f_y$ are the focal lengths and $c_x$ and $c_y$ are the image centers, each of which is acquired from the calibration process; these four parameters construct the camera (intrinsic) matrix $K$. Because the objects handled by our approach are mainly planar, we can reduce the equation by setting $Z = 0$ [4].
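As an illustration of this step, the sketch below shows how the extrinsics could be recovered with OpenCV from four corner correspondences of a planar face, assuming a previously calibrated camera. The numeric values, variable names, and the use of cv2.solvePnP are illustrative assumptions, not the authors' implementation.

```python
# Sketch: estimate the extrinsics [R | t] between a planar object face and
# the camera from four point correspondences (Eq. 1). All numeric values
# and variable names are illustrative.
import cv2
import numpy as np

# Intrinsic matrix K from a prior calibration (f_x, f_y, c_x, c_y).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                      # lens distortion assumed compensated

# Object-space corners of one rectangular face; the face is planar, so Z = 0.
X_obj = np.array([[0.0, 0.0, 0.0],
                  [0.2, 0.0, 0.0],
                  [0.2, 0.2, 0.0],
                  [0.0, 0.2, 0.0]])

# Corresponding image-space corners in pixels (e.g., picked manually).
x_img = np.array([[210.0, 180.0],
                  [395.0, 195.0],
                  [390.0, 370.0],
                  [205.0, 355.0]])

# Recover the pose; rvec is a Rodrigues rotation vector.
ok, rvec, tvec = cv2.solvePnP(X_obj, x_img, K, dist)
R, _ = cv2.Rodrigues(rvec)              # 3x3 rotation matrix of the face
t = tvec.reshape(3)                     # translation (unused for the normal)
```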

2.3 Generation of a Normal Vector Field

Based on the estimated physical transformation $[R \mid t]$, we generate and overlay a surface normal field on the image. Because we are only interested in how much the surface is oriented, not in how far away it is located, only the rotation part is considered in this step. To calculate the surface normal vector of a surface of interest, we construct a 4×4 homogeneous matrix $W$ and multiply it by the 4×1 homogeneous normal vector $n = [0, 0, 1, 1]^T$ to obtain the transformed result as


$$
n_T = W n, \qquad
W = \begin{bmatrix} R & 0_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}
\qquad (2)
$$

where $n$ is the 4×1 homogeneous original surface normal of a face in the object space and $n_T$ is the homogeneous 4×1 transformed surface normal in the image space. This process is done for each object's face of interest.
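A minimal sketch of Eq. (2) follows, assuming the rotation $R$ has already been estimated; the example rotation and the function name are illustrative.

```python
# Sketch: rotate the canonical face normal n = [0, 0, 1, 1]^T into the image
# space (Eq. 2). Only the rotation part of the extrinsics is embedded in W.
import numpy as np

def transformed_normal(R):
    """Return n_T = W n for a face whose estimated rotation is R (3x3)."""
    W = np.eye(4)
    W[:3, :3] = R                       # translation block is left at zero
    n = np.array([0.0, 0.0, 1.0, 1.0])  # canonical homogeneous normal
    return W @ n

# Example: a face rotated 30 degrees about the y-axis.
th = np.deg2rad(30.0)
R_y = np.array([[ np.cos(th), 0.0, np.sin(th)],
                [ 0.0,        1.0, 0.0       ],
                [-np.sin(th), 0.0, np.cos(th)]])
n_T = transformed_normal(R_y)           # approx. [0.5, 0.0, 0.866, 1.0]
```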

2.4 Haptic Rendering

To determine the haptic force value at a given location during the interaction phase, we consider both a) the geometric information at the touched point $(x, y)$, in the form of the surface normal $n_T$ estimated thus far, and b) the user's motion direction. We quantify the user's motion on the 2D image as the unit direction vector $\hat{m}_i$ of the motion $m_i \in \mathbb{R}^2$ in the xy space,

$$ \hat{m}_i = m_i / \lVert m_i \rVert $$

Figure 3. The unit direction vector used as the motion vector.

Then, the force at a given position $(x, y)$ in the image space is calculated as a dot product of the two vectors, as shown below.

$$ F(x, y) = \bar{n}_T \cdot \hat{m}_i \qquad (3) $$

where $\bar{n}_T$ is the 3×1 non-homogeneous rotated normal converted from $n_T$. With the given geometric feature, which is static throughout the interaction phase, the unit direction vector of motion determines the magnitude of the force and its type, i.e., whether it is resistive or assistive.
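The sketch below is one possible reading of Eq. (3), in which only the in-plane (x, y) components of the rotated normal enter the dot product with the 2D motion direction; the sign convention and the names are assumptions, not the authors' code.

```python
# Sketch: lateral force value at the cursor position (Eq. 3). The sign of the
# dot product distinguishes resistive from assistive feedback; which sign maps
# to "uphill" is a convention chosen when the force is sent to the device.
import numpy as np

def lateral_force(n_T, motion_xy):
    """n_T: rotated surface normal at the touched pixel (first 3 components).
    motion_xy: 2D cursor displacement m_i measured over one haptic frame."""
    norm = np.linalg.norm(motion_xy)
    if norm < 1e-9:
        return 0.0                      # no motion, no feedback
    m_hat = np.asarray(motion_xy) / norm
    return float(np.dot(np.asarray(n_T)[:2], m_hat))

# Example: face tilted about the y-axis; opposite motion directions yield
# force values of opposite sign (one resistive, the other assistive).
n_T = [0.5, 0.0, 0.866]
print(lateral_force(n_T, [ 1.0, 0.0]))
print(lateral_force(n_T, [-1.0, 0.0]))
```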

Figure 4. Physical meaning of the proposed interaction: moving uphill yields a resistive force, moving downhill an assistive force, and moving along the same level no feedback. Whether the cursor controlled by the force interface is climbing up or down is determined by Eq. (3). Note that all user interaction takes place in the 2D image space.

Importantly, this method can easily be applied to many types of haptic displays that can render a resistive force. In the next section, we describe an experiment with two different conditions that evaluates our method.

3 EVALUATION

This section evaluates whether users can identify geometric information from an image with haptic feedback. In contrast to previous work on geometry tasks [17], we focused on relatively large objects, which require active exploration to be identified tactually. During the experimental sessions, participants were asked to judge how much the 3D cube was rotated after touching the screen through the force interface.

3.1 Stimulus

We used a set of pictures containing a box (cuboid) 20 cm in length at different orientations as the visual stimulus set. The box is hidden from participants in the haptic-only condition. The box is rotated about its y-axis from 15 degrees to 75 degrees in increments of 5 degrees. Pictures with angles from 50 degrees to 75 degrees were generated by flipping the images horizontally.

Figure 5. A set of pictures of the box used in the experiment (orientations of 15 to 45 degrees about the y-axis shown).

As shown in Figure 5, each box has two visible faces. The vertex set of each face, which constructs $x \in \mathbb{R}^2$, is selected manually in the experiment after compensating for lens distortion. This setup is intended to minimize errors that might arise from an image-processing procedure. The ratios between the estimated surface normals and the exact solutions were 0.99 (x, left face), 1.03 (z, left face), 1.02 (x, right face), and 0.99 (z, right face). For more practical settings, robust feature-detection algorithms can be applied [3]. The force values calculated according to Eq. (3) are scaled so that they are bounded between ±0.6 N, which produces a reasonable feeling with respect to the image.
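A simple way to realize the ±0.6 N bound mentioned above is to normalize the raw Eq. (3) values by their largest expected magnitude and clip; this is only a sketch of one possible scaling, not the exact mapping used in the experiment.

```python
# Sketch: map raw Eq. (3) values to forces bounded between -0.6 N and +0.6 N.
import numpy as np

F_MAX = 0.6  # N, bound used in the experiment

def scale_force(f_raw, f_raw_max):
    """f_raw_max: largest |f_raw| expected over the stimulus set (assumed)."""
    return float(np.clip(f_raw / f_raw_max, -1.0, 1.0) * F_MAX)
```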

3.2 Apparatus

3.2.1 Force interface

We employed a haptic arm, the SensAble PHANTOM Omni™, to control the cursor on the screen and deliver force feedback to the participants. The mouse cursor in the 2D screen space is controlled by linearly mapping a 100 mm (W) × 100 mm (H) region of the device's xy plane to the screen region with a mapping gain of 6.4 px/mm. The participant's movement in the z direction was not used for controlling the cursor.
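The device-to-screen mapping described above amounts to a linear scaling by the 6.4 px/mm gain; a sketch follows, where the centering of the 100 mm workspace and the function name are assumptions.

```python
# Sketch: map a stylus position (mm, in a workspace-centered frame) to screen
# pixels with the stated gain. The device z coordinate is ignored for cursor
# control, as in the experiment.
GAIN_PX_PER_MM = 6.4       # mapping gain
WORKSPACE_MM = 100.0       # usable xy workspace (width = height)

def device_to_screen(x_mm, y_mm):
    x_px = (x_mm + WORKSPACE_MM / 2.0) * GAIN_PX_PER_MM
    y_px = (y_mm + WORKSPACE_MM / 2.0) * GAIN_PX_PER_MM
    return int(round(x_px)), int(round(y_px))
```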

3.2.2 Answering device

To efficiently record what participants felt during the experimental sessions, we developed a tangible interface that allows participants to rotate a box-shaped knob. This setup is intended to minimize errors when reporting the perceived geometric configuration. Once a participant records the answer by pressing a key on the keyboard, the rotated knob returns to its original position (i.e., zero degrees) using the motor/encoder set (MX-28, Robotis Co.) attached to the underside of the knob.


Figure 6. Developed answering interface: an 8 cm × 8 cm box-shaped knob mounted on a motor/encoder set (MX-28). Participants were instructed to rotate the knob to match the perceived orientation when reporting an answer, instead of describing or drawing the perceived geometric configuration.

3.2.3 Procedures

We designed a between-subject experiment. Participants in the first group took part in a vision-only condition (V). Without any haptic information, they were instructed to judge the orientation of the object as seen on the screen. The objective of this setup was to measure both the capability of visual recognition, as a reference, and the performance of the answering interface. Participants in the second group took part in a haptic-only condition (H), in which haptic feedback was provided according to Eq. (3).

Test with visual stimuli (V). Before the experiment, the experimenter explained to each subject in the first group the visual stimuli displayed on the computer monitor and how to use the answering interface when giving an answer during a trial. In the main experiment, participants were instructed to judge how the object was rotated based on the picture shown on the screen. The stimulus set consisted of pre-specified pictures of a box placed on a desk at thirteen different orientation angles. Each stimulus was repeated 6 times, so the total number of trials was 78 (= 13 × 6). When giving an answer, participants were allowed to rotate the knob either way, i.e., clockwise or counter-clockwise, and could adjust it as much as they wanted until they judged that its rotation matched what they had perceived. Once the final answer was submitted by pressing the space bar on the keyboard, the knob rotated back to its home position. At the end of the main experiment, we collected each participant's task load using a standard NASA-TLX questionnaire to measure the feasibility of our approach.

Test with haptic stimuli without visual stimuli (H). The apparatus and procedures were the same as in the V condition, except that participants were given the haptic arm and no visual stimuli were provided. Before the main experiment, the experimenter described the haptic arm and let the participants experience virtual haptic environments so that they could become accustomed to point-based force display. Participants then explored the screen through a mouse cursor controlled by the force interface, with both haptic and visual information provided. These practice trials, each of which showed a randomly oriented box, continued until the participants felt confident about the task.

3.2.4 Participants

Twelve participants (7 male, mean age 21.9) took part in the V condition and fourteen (8 male, mean age 21.7) in the H condition. All were right-handed and reported no known cutaneous or kinesthetic problems. They were university students and were paid for their participation.

3.2.5 Results and discussion

The overall absolute errors were 5.5 degrees (stdev = 3.8) for the V condition and 8.2 degrees (stdev = 6.5) for the H condition. The mean completion times were 5.06 s (stdev = 3.32) and 12.59 s (stdev = 8.69), respectively. A one-way ANOVA revealed a significant difference between the absolute errors of V and H (F(1, 2104) = 126.768, p < 0.0001). There were also significant differences in the absolute errors measured at each angle (F(12, 2093) = 10.476, p < 0.001). Figure 7 shows the responses from all experimental conditions.

Figure 7. Responses (degrees) with respect to the presented angle (degrees) for the reference and the V and H conditions. Error bars represent the standard deviation.

As expected, the object's rotation was easily and reliably identifiable by sight. Although the absolute errors measured in the H condition were 1.50 times greater than those of the V condition, the data show that the participants were able to estimate the orientation of the box on the whole. Note that the V condition serves as a reference, not as a direct comparison with the H condition in terms of recognition performance, given that haptics is a local sense. This result indirectly supports previous research describing humans as generally proficient at identifying common objects through manual exploration alone [15]. As can be seen in Figure 7, both the haptic and the visual conditions showed a similar tendency: participants perceived a smaller (larger) orientation when the object was rotated less (more) than approximately 45 degrees. Presumably, this tendency results from participants adopting absolute reference orientations, such as 15, 45, and 75 degrees, each of which is relatively easy to position. Considering that this pattern was observed not only in the haptic condition but also in the visual condition (V), it might be associated with the nature of human information processing; further investigation of this phenomenon is left for future work. The measured overall workload indexes were 22.9 and 38.3 out of 100 for the V and H conditions, respectively. In conclusion, the proposed method can be used for haptically representing planar objects displayed in a 2D image. Note that our approach is based on geometric orientation, not on pixel-based image processing.


4 APPLICATIONS Geometry-based interaction with images has received a considerable amount of attention in the field of HCI [9]. In this section, we explore the application space of geometry-based haptic interactions.

4.1 Haptic rendering of edges

Edges are one of the most important features when people tactually identify everyday objects [16]. Conventional approaches generally rely on edge-detection processes to haptically render edges displayed in an image. However, the simulated haptic feelings produced by such methods are not as natural as real edges, mainly because haptic feedback is conveyed only on the edge itself. In addition, edges detected in images do not always correspond to depth discontinuities, since edges from the image-processing step also arise from discontinuities in surface color and/or illumination. In contrast, our approach can simulate such important image features without loss of generality. Figure 8 shows an example of a 2D picture with estimated surface normals displayed at the center of each rectangular wall. In this example, users can clearly feel the edges between faces. Note that all of the surface normals between adjacent walls are discontinuous.

Figure 8. Example of a 2D picture with estimated surface normal vectors displayed. Discontinuous normal vectors between neighboring faces result in discontinuous (or discrete) haptic experiences.

4.2 Scattering and Feeling

Instead of picking feature points ($x \in \mathbb{R}^2$) manually or detecting them by image processing, we employed a scattering method that projects a grid of points onto the object's surface. In this setup, a picture is taken with a set of predefined patterns. This process allows real-world objects to be modeled with the projected mesh and rendered with resistive force feedback using the proposed method. This is similar to a previous approach that casts a single laser point onto an object to measure its distance [26], in that it models the world and physically renders the estimated model. In our approach, we use a four-point projection method to estimate the object's geometric feature (i.e., the surface normal) at a given point. Figure 9 shows an example of haptic interaction with a picture overlaid with a scattered array of points. In terms of implementation, we plan to use an IR-based projection method so that the patterns are invisible to the human eye.
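One plausible reading of the four-point approach is sketched below: the surface normal at a grid location is taken as the normalized cross product of the vectors joining its four reconstructed 3D neighbors. How those neighbors are reconstructed from the projected pattern is omitted, and the function is purely illustrative.

```python
# Sketch: local surface normal from the four grid points surrounding a query
# location (left/right and down/up neighbors reconstructed in 3D).
import numpy as np

def local_normal(p_left, p_right, p_down, p_up):
    u = np.asarray(p_right, float) - np.asarray(p_left, float)
    v = np.asarray(p_up, float) - np.asarray(p_down, float)
    n = np.cross(u, v)
    return n / np.linalg.norm(n)
```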

Figure 9. Physical interaction with objects using a scattered array of points. Instead of surface normals, we use vertex normals (displayed in magenta) to approximate surfaces with smoothly varying slopes. The interpolated normal vector (blue) is located at the mouse point.

To approximate objects with a finite number of pattern points, we adopted normal vector interpolation. This approach is used in a variety of haptic applications for force shading in 3D object rendering [2, 21]. The controlled variation in the direction of the force vector is intended to minimize the edge effect caused by the discrete combination of faces. For the same reason, we adopted normal interpolation for force shading of the faceted objects that appear in the image. Figure 10 shows an example of force shading on the 2D image.
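A sketch of the normal interpolation used for force shading: inside one grid cell the four vertex normals are blended bilinearly according to the cursor's fractional position and renormalized before being fed into Eq. (3). The cell-lookup details and names are assumptions.

```python
# Sketch: bilinear interpolation of vertex normals inside one grid cell,
# smoothing the force direction across faces (force shading on the 2D image).
import numpy as np

def interpolated_normal(n00, n10, n01, n11, s, t):
    """n00..n11: unit normals at the cell corners; s, t in [0, 1] give the
    cursor's fractional position within the cell."""
    n = ((1 - s) * (1 - t) * np.asarray(n00) +
         s * (1 - t) * np.asarray(n10) +
         (1 - s) * t * np.asarray(n01) +
         s * t * np.asarray(n11))
    return n / np.linalg.norm(n)
```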

Figure 10. Example of interpolated forces (force in N over time in sec; original vs. interpolated) while moving across the surface from left to right. The proposed force shading on the 2D space compensates for discontinuities in force magnitude and direction.

5 CONCLUSION

In this paper, we proposed a geometry-based haptic interaction scheme for 2D images. Our system allows users to interact physically with planar objects displayed in a picture. To do this, we first estimated the physical transformation between the two spaces, i.e., the object space ($\mathbb{R}^3$) and the image space ($\mathbb{R}^2$). We then calculated a vector field that characterizes the surface geometry of interest. To evaluate the proposed haptic rendering method, we conducted a set of geometry tasks in which participants were asked to feel and estimate the orientation displayed in the picture. On the whole, the data suggest that the participants could reliably estimate the geometric layout that appears in the picture by only touching the surface. We expect that the proposed haptic rendering approach can be used for haptically augmenting 2D images. As further work, we plan to apply the proposed rendering scheme to mobile interaction contexts so that it can be used more practically.

ACKNOWLEDGMENTS

The authors thank Dr. Ivan Poupyrev, Dr. Ali Israr, and Dr. Takaaki Shiratori at Disney Research Pittsburgh for valuable discussions regarding geometry-based haptic rendering. This work was supported by the IT R&D program of MKE/KEIT [2009-S-035-01, Contact-free Multipoint Realistic Interaction Technology Development].

REFERENCES

[1] Bach-y-Rita, P., Collins, C. C., Saunders, F. A., White, B. and Scadden, L. Vision substitution by tactile image projection. Nature, 221 (1969), 963-964.
[2] Basdogan, C., Ho, C. H. and Srinivasan, M. A ray-based haptic rendering technique for displaying shape and texture of 3D objects in virtual environments. In Proc. ASME Dynamic Systems and Control Division (1997), 77-84.
[3] Bay, H., Tuytelaars, T. and Van Gool, L. SURF: Speeded up robust features. Computer Vision-ECCV 2006 (2006), 404-417.
[4] Bradski, G. and Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, 2008.
[5] Cha, J., Eid, M. and Saddik, A. Touchable 3D video system. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), 5, 4 (2009), 1-25.
[6] Fritz, J. P. and Barner, K. E. Stochastic models for haptic texture. In Proc. SPIE International Symposium on Intelligent Systems and Advanced Manufacturing - Telemanipulator and Telepresence Technologies III (1996), 34-44.
[7] Han, B.-K., Kim, S.-C., Lim, S.-C., Pyo, D. and Kwon, D.-S. Physical mobile interaction with kinesthetic feedback. In Proc. IEEE Haptics Symposium (2012), 571-575.
[8] Hinckley, K., Sinclair, M., Hanson, E., Szeliski, R. and Conway, M. The VideoMouse: a camera-based multi-degree-of-freedom input device. In Proc. 12th Annual ACM Symposium on User Interface Software and Technology, ACM (1999), 103-112.
[9] Horry, Y., Anjyo, K.-I. and Arai, K. Tour into the picture: using a spidery mesh interface to make animation from a single image. In Proc. SIGGRAPH 97 (1997), 225-232.
[10] Israr, A., Bau, O., Kim, S.-C. and Poupyrev, I. Tactile feedback on flat surfaces for the visually impaired. In Proc. 2012 ACM Annual Conference on Human Factors in Computing Systems, Extended Abstracts, ACM (2012), 1571-1576.
[11] Kim, S.-C., Kang, S.-C. and Kwon, D.-S. Sound generation for the haptic perception using an irregular primitive function. In Proc. 16th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2007), IEEE (2007), 19-24.
[12] Kim, S.-C., Kyung, K.-U. and Kwon, D.-S. Haptic annotation for the construction of an interactive image. In Proc. International Conference on Ubiquitous Information Management and Communication, ACM (2011).
[13] Kyung, K.-U., Kim, S.-C. and Kwon, D.-S. Texture Display Mouse: vibrotactile pattern and roughness display. IEEE/ASME Transactions on Mechatronics, 12, 3 (2007), 356-360.
[14] Lederman, S. J. and Klatzky, R. L. Hand movements: a window into haptic object recognition. Cognitive Psychology, 19, 3 (1987), 342-368.
[15] Lederman, S. J. and Klatzky, R. L. Haptic identification of common objects: effects of constraining the manual exploration process. Attention, Perception, & Psychophysics, 66, 4 (2004), 618-628.
[16] Lederman, S. J. and Klatzky, R. L. Haptic perception: a tutorial. Attention, Perception, & Psychophysics, 71, 7 (2009), 1439-1459.
[17] Lederman, S. J. and Klatzky, R. L. Sensing and displaying spatially distributed fingertip forces in haptic interfaces for teleoperator and virtual environment systems. Presence: Teleoperators & Virtual Environments, 8, 1 (1999), 86-103.
[18] Minsky, M., Ouh-young, M., Steele, O., Brooks, F. P., Jr. and Behensky, M. Feeling and seeing: issues in force display. SIGGRAPH Comput. Graph., 24, 2 (1990), 235-241.
[19] Peachey, D. Building procedural textures. In Texturing and Modeling: A Procedural Approach, third ed. Morgan Kaufmann (2003).
[20] Robles-De-La-Torre, G. and Hayward, V. Force can overcome object geometry in the perception of shape through active touch. Nature, 412, 6845 (2001), 445-448.
[21] Ruspini, D. C., Kolarov, K. and Khatib, O. The haptic display of complex graphical environments. In Proc. SIGGRAPH '97 (1997), 345-352.
[22] Saga, S. and Deguchi, K. Lateral-force-based 2.5-dimensional tactile display for touch screen. In Proc. IEEE Haptics Symposium (2012).
[23] Shopf, J. and Olano, M. Procedural haptic texture. In Proc. 19th Annual ACM Symposium on User Interface Software and Technology (2006), 179-185.
[24] Willis, K. D. D., Poupyrev, I., Hudson, S. E. and Mahler, M. SideBySide: ad-hoc multi-user interaction with handheld projectors. In Proc. 24th Annual ACM Symposium on User Interface Software and Technology, ACM (2011), 431-440.
[25] Yang, G.-H. and Kwon, D.-S. KAT II: tactile display mouse for providing tactile and thermal feedback. Advanced Robotics, 22, 8 (2008), 851-865.
[26] Yano, H., Miyamoto, Y. and Iwata, H. Haptic interface for perceiving remote object using a laser range finder. In Proc. Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (2009), 196-201.
