An animation bilateral filter for slow-in and slow-out effects



Graphical Models 73 (2011) 141–150


Ji-yong Kwon, In-Kwon Lee *
Dept. of Computer Science, Yonsei University, Shin-Chon Dong 134, Seoul 120-749, Republic of Korea

Article history: Received 27 May 2010; Received in revised form 1 February 2011; Accepted 11 February 2011; Available online 23 February 2011

Keywords: Slow-in and slow-out; Bilateral filter; Cartoon animation; Animation signal processing


* Corresponding author. E-mail addresses: [email protected] (J.-y. Kwon), [email protected] (I.-K. Lee).

Abstract

In this paper, we introduce a method that endows a given animation signal with slow-in and slow-out effects by using a bilateral filter scheme. By modifying the equation of the bilateral filter, the method applies reparameterization to the original animation trajectory. This holds extreme poses in the original animation trajectory for a long time, in such a way that there is no distortion or loss of the original information in the animation path. Our method can successfully enhance the slow-in and slow-out effects for several different types of animation data: keyframe and hand-drawn trajectory animation, motion capture data, and physically-based animation by using a rigid body simulation system.

© 2011 Elsevier Inc. All rights reserved.

1. Introduction

The movements of an object in cartoon animation differ from realistic movements. Cartoon-style movements are expressive and exciting, and they attract audiences to cartoon animations. The animation principles [1,2] developed from traditional cartoon animation allow the motion of a cartoon object to be both expressive and funny. Unfortunately, these principles represent artistic suggestions to an animator rather than computational methods. In fact, artists still prefer keyframe-based animation systems to methods based mainly on real-world observations, which automatically generate realistic animation data.

Some researchers have studied methods that emulate the traditional animation techniques described in the animation literature [1–3]. Chenney et al. [4] simulated the squash-and-stretch effect for simple rigid bodies by applying non-uniform scaling to the body in accordance with its velocity, acceleration, and collision. Kim et al. [5] developed a method that generates anticipation and follow-through effects by extrapolating changes in joint angles. Wang and his colleagues [6] proposed an innovative method that generates anticipation and follow-through effects through the convolution of a Laplacian of Gaussian (LoG) kernel. They also produced effective squash-and-stretch effects in 2D mesh animations by varying the time-shift term of an LoG filter. Kwon and Lee [7] proposed the construction of a sub-joint hierarchy by subdividing the basic joints of a character and then used it to achieve rubber-like animation effects. The findings from these studies allow the generation of cartoon-style animation from realistic or unskillful hand-made animation data; however, they usually concentrate on the spatial exaggeration of a motion, while the temporal exaggeration of cartoon animation is also important.

Furthermore, according to Terra and Metoyer [8], a novice user finds it more difficult to specify keyframe timings than to set the spatial values of the keyframes.

This paper introduces a simple and effective method for generating the slow-in and slow-out effect, which is a key animation principle relating to temporal exaggeration. In fact, most keyframe-based animation authoring systems have a function that controls the slow-in and slow-out effect of an object's animation by using a Bézier or cosine curve as the timing curve. Some researchers have introduced methods for generating the slow-in and slow-out effect for characters' motions based on keyframe extraction and a time-warping technique [9,10]. However, the previous methods are mostly based on keyframe information, and we cannot use them for animation data that has no keyframe information or where such information is difficult to find. For example, animators often use a rigid body simulation system to generate realistic animations of multiple objects that would be very difficult to create manually; however, slow-in and slow-out cartoon stylization of such an animation would be difficult with the previous methods.

On the other hand, our method does not require keyframe information for the given animation data. It is a simple and fast method based on kernel convolution. Similar to the method proposed by Wang et al. [6], our method can be applied to a wide variety of animation data, including 2D animation, rigid body animation, and character animation. The key idea of our method is to exploit the scheme of a bilateral filter [11], using it as a tool for reparameterization of the animation trajectory.

This paper is organized as follows. We briefly review previous work in Section 2 and present our animation bilateral filter in Section 3. In Section 4, we introduce some applications of our method and discuss the experimental results of each application. A comparison between the stylized animation produced by our filter and that produced by a previous method is given in Section 5. Finally, we draw conclusions in the last section.

2. Related work

Several researchers have studied signal processing techniques that generate exaggerated or attenuated animation from animation data. Unuma et al. [12] addressed this problem using relatively simple interpolation and extrapolation techniques for motion data. Bruderlin and Williams [13] introduced a signal processing technique in which motion data is split into several frequency bands, which are then modified and used to resynthesize an exaggerated motion. Lee and Shin [14,15] developed stable methods for processing rotational motions in a similar fashion using quaternions. Our method can also be classified as an animation signal processing technique.

Several methods use variations in timing to synchronize a set of motion data or to stylize motion data. Witkin and Popovic [16] proposed a motion warping technique for editing animation data based on time-warping of the motion data. Wang et al. [6] produced effective squash-and-stretch effects in 2D mesh animations by varying the time-shift term of an LoG filter. White et al. [9] developed a slow-in and slow-out filter for character motion that is based on a time-warping technique. Tateno et al. [10] used a similar technique to generate stylized motions. Kass and Anderson [17] argue that the self-overlapping effect that occurs when a character is squashed can be modeled mathematically by varying the time phase of the "wiggle spline" they introduced. Coleman et al. [18] stylized skeletal animation using staggered poses, in which a set of different timings is keyed for one pose. We also utilize a time-warping function on the animation signal to enhance the slow-in and slow-out effect of given animation data.

3. Animation bilateral filter

The bilateral filter, introduced by Tomasi and Manduchi [11], is a non-linear filter widely used in image processing. The general bilateral filter used for image processing consists of two weight terms: one is a spatial weight term and the other is an intensity weight term that helps preserve the edges of an input image. Assuming that I(x) is the pixel intensity value at position x of image I, we can compute the bilateral-filtered value I'(x) as follows:

$$I'(x) = \frac{\sum_{y \in N(x)} G_{\sigma_s}(x - y)\, G_{\sigma_v}(I(x) - I(y))\, I(y)}{\sum_{y \in N(x)} G_{\sigma_s}(x - y)\, G_{\sigma_v}(I(x) - I(y))}, \qquad (1)$$

where N(x) is the set of positions of the neighborhood pixels around x, and G_σ(x) is the Gaussian distribution function with standard deviation σ. The bilateral filter simultaneously considers the spatial weight G_σs(x − y), which gives a low weight to a neighboring pixel far from the center pixel, and the intensity weight G_σv(I(x) − I(y)), which gives a low weight to a neighboring pixel whose value differs from the center pixel's value; therefore, an edge-preserved, smoothed image is generated. Fig. 1 provides an example of a 1D signal processed using a bilateral filter.
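To make the scheme concrete, here is a minimal sketch of Eq. (1) for a 1D signal in Python/NumPy; the function name and parameters are our own illustration, not code from the paper.

```python
import numpy as np

def bilateral_filter_1d(signal, half_width, sigma_s, sigma_v):
    """Edge-preserving smoothing of a 1D signal following Eq. (1).

    signal:     1D array of intensity values I(x).
    half_width: neighborhood radius, N(x) = {x - half_width, ..., x + half_width}.
    sigma_s:    std. dev. of the spatial Gaussian G_sigma_s.
    sigma_v:    std. dev. of the intensity Gaussian G_sigma_v.
    """
    n = len(signal)
    out = np.empty(n)
    for x in range(n):
        lo, hi = max(0, x - half_width), min(n, x + half_width + 1)
        y = np.arange(lo, hi)
        w_spatial = np.exp(-((x - y) ** 2) / (2.0 * sigma_s ** 2))
        w_value = np.exp(-((signal[x] - signal[y]) ** 2) / (2.0 * sigma_v ** 2))
        w = w_spatial * w_value
        out[x] = np.sum(w * signal[y]) / np.sum(w)  # normalized weighted sum
    return out
```

Applied to a smoothed step like the 1D signal in Fig. 1, the intensity term keeps the two plateaus separated while the spatial term smooths within each plateau.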

As observed from Eq. (1), the key factor of a bilateral filter in image processing is the multiplication by the intensity weight term. In other words, the feature-related weight term of the image bilateral filter is formulated using the difference in pixel intensity. As a result, the filtered signal values on a smooth edge tend to move toward the centers of the planar regions (see the right side of Fig. 1). This effect is closely related to the slow-in and slow-out rule of the animation principles we aim to formulate. The key rule of slow-in and slow-out is achieved by showing the keyframes over a relatively longer period than the inbetween frames [2]. Assume that the 1D signal in Fig. 1 is an animation signal, where the x axis represents the time domain. The object moves from the low position to the high position. In the filtered signal, the values at the inbetween frames (middle of the graph) are shifted toward the values at the keyframes (both sides of the graph); therefore, we can expect a slow-in and slow-out effect.

Fig. 1. Example of a 1D signal processed using a bilateral filter.

Fig. 2. Example of weight values resulting from the arc-length based spatial weight function f(p(t), p(u)). (a) The case of a trajectory with uniform velocity and (b) the case of a trajectory with variable velocity.

1 For interpretation of color in Figs. 1–12, the reader is referred to the web version of this article.

Let p(t) be the position at time t ∈ [0, T] of an arbitrary object. We can treat this as a spatial trajectory curve with time parameter t. Then, the stylized trajectory p̂(t) that results from the bilateral filter can be computed as follows:

$$\hat{p}(t) = \frac{\sum_{u \in [t-h,\, t+h]} G_{\sigma_t}(t - u)\, f(p(t), p(u))\, p(u)}{\sum_{u \in [t-h,\, t+h]} G_{\sigma_t}(t - u)\, f(p(t), p(u))}, \qquad (2)$$

where h is the kernel size of the bilateral filter, G_σt(t − u) is the temporal weight term that decays the weight far from the current time t, and f(p(t), p(u)) is the feature weight term that gives a slow-in and slow-out effect to the animation. G_σt(t − u) and f(p(t), p(u)) are the analogies of G_σs(x − y) and G_σv(I(x) − I(y)) in Eq. (1), respectively.

For Eq. (2) to generate the slow-in and slow-out effects, we must select an appropriate feature weight function f(p(t), p(u)). The most important property f(p(t), p(u)) should have is to give a large weight to neighbors that move slowly.

We tested many variations of the feature weight function and developed the arc-length based spatial weight function, which is defined as follows:

$$f(p(t), p(u)) = 1 - \frac{|P(t) - P(u)|}{\max(P(t+h) - P(t),\; P(t) - P(t-h))}, \qquad (3)$$

where $P(t) = \int_0^t \left|\frac{dp(v)}{dv}\right| dv$ is the arc length of the trajectory from 0 to t. The denominator max(P(t + h) − P(t), P(t) − P(t − h)) is a normalization term that keeps the resulting weight in the range [0, 1]. In detail, f(p(t), p(u)) gives a high weight to a neighbor point p(u) close to the center point p(t) in arc length, and a low weight to a point far from p(t).

Fig. 2 explains how f(p(t), p(u)) leads to slow-in and slow-out effects. Assume that the trajectory in Fig. 2a has a uniform velocity. We can map each trajectory point onto the arc-length parameterized axis shown in the right graph of Fig. 2a. For a trajectory point p(t), marked as a green circle, the weight resulting from f(p(t), p(u)) decreases uniformly away from the center point; therefore, the weighted sum of the points is located near p(t). On the other hand, when the velocity of the trajectory decreases, as in Fig. 2b, f(p(t), p(u)) gives a higher weight to the points on the right side than to those on the left side. Thus, the weighted sum of the points moves slightly toward the points on the right side, which move more slowly than those on the left. In other words, we can expect slow-in and slow-out effects by letting f(p(t), p(u)) give a relatively high weight to slow movement.

Fig. 3b shows the filtered trajectory resulting from Eq. (2). We can observe that the trajectory processed by the bilateral filter applied directly to the curve points loses the detail of the original trajectory curve. Because the position-based bilateral filter generates each point as the weighted sum of the points in the kernel, the shape of the filtered trajectory differs from that of the input trajectory. Of course, this still endows the new trajectory with the slow-in and slow-out effect; however, doing so results in the loss of information from the original trajectory.

This problem can be solved by reformulating Eq. (2) based on the parameters of the original trajectory curve. In other words, we can avoid the detail loss by exploiting the bilateral filter scheme as a tool for curve reparameterization. We first divide the input curve into the curve and its 1D parameter curve, and then replace the input and output point of Eq. (2) with the parameter as follows:

$$\hat{p}(t) = p(s(t)), \qquad s(t) = \frac{\sum_{u \in [t-h,\, t+h]} G_{\sigma_t}(t - u)\, f(p(t), p(u))\, u}{\sum_{u \in [t-h,\, t+h]} G_{\sigma_t}(t - u)\, f(p(t), p(u))}. \qquad (4)$$

Fig. 3. Applying the bilateral filter to a spatial trajectory curve. (a) The original trajectory, (b) the trajectory resulting from the position-based bilateral filter, and (c) the trajectory resulting from the parameter-based bilateral filter.

We can consider the reparameterization function s(t) as a time-warping function. Fig. 3c shows the trajectory resulting from the parameter-based bilateral filter described by Eq. (4). As shown in the figure, the shape of the filtered trajectory does not change, while the curve points with fast movements are slightly moved toward points with slow movements. Therefore, we can endow an arbitrary spatial trajectory with the slow-in and slow-out effects by using this simple variation of the bilateral filter technique.
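A minimal sketch of the parameter-based filter (Eqs. (2)–(4)) for a per-frame sampled trajectory follows, again in Python/NumPy. The names and the discrete approximations (finite-difference arc length, window clipping at the boundaries) are our own illustrative choices, not the authors' code.

```python
import numpy as np

def arc_length(points):
    """Cumulative arc length P(t) of a sampled trajectory of shape (T, d)."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return np.concatenate(([0.0], np.cumsum(seg)))

def animation_bilateral_warp(points, h, sigma_t):
    """Reparameterization s(t) of Eq. (4) for a trajectory sampled per frame.

    points:  (T, d) array, one row per frame.
    h:       half kernel size in frames (window [t - h, t + h]).
    sigma_t: std. dev. of the temporal Gaussian G_sigma_t.
    Returns s, an array of warped (fractional) frame times.
    """
    T = len(points)
    P = arc_length(points)
    s = np.empty(T)
    for t in range(T):
        u = np.arange(max(0, t - h), min(T, t + h + 1))
        g = np.exp(-((t - u) ** 2) / (2.0 * sigma_t ** 2))
        # Eq. (3): arc-length feature weight, normalized to [0, 1].
        fwd = P[min(T - 1, t + h)] - P[t]
        bwd = P[t] - P[max(0, t - h)]
        denom = max(fwd, bwd)
        if denom == 0.0:
            denom = 1.0                   # guard a fully static window
        f = 1.0 - np.abs(P[t] - P[u]) / denom
        w = g * f
        s[t] = np.sum(w * u) / np.sum(w)  # weighted sum of parameters u
    return s

def resample(points, s):
    """p(s(t)): linear interpolation of the trajectory at the warped times."""
    T, d = points.shape
    t = np.arange(T)
    return np.stack([np.interp(s, t, points[:, k]) for k in range(d)], axis=1)
```

With the paper's later parameter choice σt = 0.3h + 0.8, a call might look like `s = animation_bilateral_warp(traj, h=20, sigma_t=0.3 * 20 + 0.8)` followed by `stylized = resample(traj, s)`.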

3.1. Controlling the animation bilateral filter

While the bilateral filter in image processing has two control parameters, which determine the spatial kernel size and the threshold of the intensity difference, our animation bilateral filter has only one parameter (i.e., the kernel size h), because we use a normalized arc-length-based weight value. Unfortunately, there is no intuitive relationship between the kernel size and the magnitude of the temporal exaggeration.

To enhance the controllability of the animation bilateral filter, we decompose the reparameterization function s(t) in Eq. (4) into the identity function t and the time-shift function w(t) as below:

$$s(t) = t + \frac{\sum_{u \in [-h,\, h]} G_{\sigma_t}(u)\, f(p(t), p(t+u))\, u}{\sum_{u \in [-h,\, h]} G_{\sigma_t}(u)\, f(p(t), p(t+u))} = t + w(t). \qquad (5)$$

We can expect the resulting trajectory to be greatly exaggerated if we use large time-shift values. Therefore, we introduce the exaggeration magnitude parameter α ≥ 0 as a scaling term on w(t):

$$s(t) = t + \alpha\, w(t). \qquad (6)$$

Because s(t) can be considered a time-warping function, it must strictly satisfy the monotonically increasing condition in order to avoid side effects (e.g. a rewinding animation). Thus, using an excessive α can cause such a side effect. We can avoid this by computing the maximum magnitude α_max. The monotonically increasing condition of s(t) can be described as follows:

$$s(t+1) - s(t) \ge 0, \quad t \in [0, T-1].$$

Substituting t + αw(t) for s(t), we get:

$$t + 1 + \alpha w(t+1) - t - \alpha w(t) \ge 0, \qquad \alpha\,(w(t+1) - w(t)) \ge -1.$$

Thus, we can find α_max as below:

$$\alpha_{\max} = \min_{t \in [0, T-1]} \max\!\left(0,\; \frac{1}{w(t) - w(t+1)}\right). \qquad (7)$$

By restricting α ∈ [0, α_max], the time-warping function exaggerated by the filter always satisfies the monotonically increasing condition.
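A sketch of this magnitude control in Python follows. Note that we read Eq. (7) as constraining only the frames where w(t) decreases, since frames with non-decreasing w impose no bound on α; the function names are our own.

```python
import numpy as np

def alpha_max(w):
    """Largest alpha keeping s(t) = t + alpha * w(t) monotonically increasing.

    From alpha * (w(t+1) - w(t)) >= -1: only frames where w decreases
    constrain alpha; if w never decreases, any alpha >= 0 is safe.
    """
    drop = w[:-1] - w[1:]              # w(t) - w(t+1)
    active = drop > 0.0                # frames that actually bound alpha
    if not np.any(active):
        return np.inf
    return np.min(1.0 / drop[active])  # Eq. (7), inactive frames ignored

def controlled_warp(t, w, alpha_ratio):
    """s(t) = t + alpha * w(t) (Eq. (6)) with alpha = alpha_ratio * alpha_max."""
    alpha = alpha_ratio * alpha_max(w)
    return t + alpha * w

# Hypothetical usage with the warp from the earlier sketch:
# s = animation_bilateral_warp(traj, h=20, sigma_t=6.8)
# w = s - np.arange(len(traj))         # time-shift function w(t), Eq. (5)
# s_exag = controlled_warp(np.arange(len(traj)), w, alpha_ratio=0.8)
```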

We experimentally checked the effect of the kernel size and the exaggeration magnitude on several examples. Fig. 4 shows one example from our experiment. The original trajectory was generated by a commercial keyframe-based animation system and was sampled at 30 frames per second. The trajectory has low-frequency parts at the start of the curve (i.e. the "g" character) and high-frequency parts at the end of the curve (i.e. the "m" character). The half kernel size h means that the filter includes the neighboring frames [t − h, t + h] in the convolution computation. σt is automatically determined by a simple equation; in the experiments in this paper, we used σt = 0.3h + 0.8. The normalized exaggeration magnitude α/α_max ∈ [0, 1] is used for the experiment. The red dots are the time stamps that represent the position of the object at each frame time.

As shown in Fig. 4, we observed that a large exaggeration magnitude parameter made the results stronger and stiffer, while the results with different kernel sizes appeared similar. However, we confirmed that filtering with kernels that are too small or too large cannot generate desirable results for low-frequency or high-frequency motions. For example, the timings in the green circles of Fig. 4 are apparently slower than those of their neighboring points; however, the filter fails to generate relatively slow movements there, even when the magnitude parameter is at its maximum. In our further experiments, we used a medium-sized kernel (h = 20) with an appropriate exaggeration magnitude. Alternatively, we might adopt a variable kernel size strategy, similar to the method proposed by Wang and his colleagues [6].

Fig. 4. Example with different kernel sizes and magnitudes.

4. Applications

We can apply the bilateral filter described in the previous section to a variety of animation signals. In this section, we enumerate some applications of our animation bilateral filter and discuss the experimental results from each application. Since our results are mostly based on retiming the original animation, it can be hard to find the differences between the original and our results using only screenshots. Therefore, we recommend watching the associated video material in order to fully appreciate the difference. All of the animation data in this paper has been sampled at 30 frames per second.

4.1. Keyframe and hand-drawn trajectory

We first examined our filter in a simple case: 2D keyframe animation. Fig. 5 shows an example of keyframe animation data. The trajectories in Fig. 5a were generated by Bézier curve interpolation, which is commonly provided by commercial animation authoring tools. The results of our method for the original trajectory curve are presented in Fig. 5b–d with different exaggeration magnitudes. As we can see, our filter successfully generates temporally exaggerated results for a given keyframe animation trajectory at various exaggeration magnitudes.

Recently, animation generated from hand-drawn trajectories has become popular and widely studied because such methods are intuitive and easy for a novice user to use. We tested our filter on a 2D computer animation with a hand-drawn trajectory. Fig. 6 shows a bouncing ball animation made from the hand-drawn trajectory and the result produced by our filter. We can observe that the time spent at the extreme points increases when our approach is used. The slow-in and slow-out effect makes the animation more expressive and exaggerated than the original hand-drawn trajectory.

4.2. Motion capture data

Motion capture data usually consists of the spatial trajectory of a root joint and a set of joint rotation sequences, and the position of each joint can be easily computed using forward kinematics. We can apply our filter to motion capture data by considering these joint positions to be a high-dimensional animation signal. Assuming that pi(t) is the position of the ith joint at time t, the pose at that time can be represented as the high-dimensional vector p(t) = [p1(t), p2(t), ..., pN(t)]^T, where N is the number of joints of the character. Fig. 7a shows the original motion capture data and Fig. 7b shows the comparison between the original and the exaggerated motion produced by our filter. We can observe that the filtered motion (i.e., the green character) shows in-air poses that have relatively slow velocities over a longer period, making it look more expressive and exciting than the original motion capture data.
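Treating a pose sequence as one high-dimensional trajectory is just a reshape; the following short sketch (our own illustration) stacks per-joint positions so the warp from the earlier sketch can be applied to the whole pose at once.

```python
import numpy as np

def stack_poses(joint_positions):
    """(T, N, 3) per-joint positions -> (T, 3N) high-dimensional signal p(t)."""
    T, N, d = joint_positions.shape
    return joint_positions.reshape(T, N * d)

# Hypothetical usage with the earlier warp sketch:
# p = stack_poses(joint_positions)            # p(t) = [p_1(t), ..., p_N(t)]^T
# s = animation_bilateral_warp(p, h=20, sigma_t=6.8)
# stylized = resample(p, s).reshape(joint_positions.shape)
```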

Fig. 5. Applying the bilateral filter to a keyframe-based animation trajectory. The red dots represent the time stamps for every frame, and the blue dots are the keyframes. (a) Original trajectory produced by Bézier interpolation, (b) the filtered trajectory with α/α_max = 0.3, (c) α/α_max = 0.6, and (d) α/α_max = 1.0.

Fig. 6. Applying the bilateral filter to a 2D hand-drawn animation trajectory. The red dots represent the time stamps for every frame. (a) Original trajectory and (b) the trajectory resulting from our filter (α/α_max = 0.5).

Fig. 7. Example using motion capture data. (a) The original motion data, (b) the exaggerated motion considering the motion as a high-dimensional curve (α/α_max = 0.66), and (c) the exaggerated motion produced by applying our filter to each joint (α/α_max = 0.8). The exaggerated characters (i.e. the green and blue characters in (b) and (c)) are overlapped with the original for precise comparison.


An interesting application of our filter for stylizing motion capture data is to apply it to the spatial trajectory pi(t) of each joint in a character individually. According to Coleman et al. [18], the movement of different limbs in expressive and believable animation is not exactly aligned: even though two joints are connected by a link, their timings at the start and end of an action differ slightly. Due to this slight timing difference, and to the effect of our filter whereby a joint reaches its extreme points earlier and leaves them later than in the original, applying our filter to the spatial trajectory of each joint results in a motion akin to "stretching limbs", which makes the character look funny and flexible.

The stylized motion described above is a set of joint trajectories, so we must convert it into a set of rotational data. We utilize the IK method [19] to reconstruct the animation data from the set of joint trajectories. The difference between our conversion and the traditional IK method is that we allow links to stretch: we add an additional animation channel to each joint in order to scale the joint's link. Fig. 7c shows the comparison between the original and the exaggerated motion processed by our filter for each joint. As we can see, the blue character's right hand arrives quickly at extreme positions, while the left leg stretches slightly due to the slow start of the left foot. Therefore, we can expect the exaggerated motion to look more expressive due to the slow-in and slow-out effect, and funny and flexible due to the stretching of limbs.
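A hedged sketch of this per-joint stylization follows; it is our own construction, not the authors' IK solver from [19]. It warps each joint trajectory independently and derives a per-frame link scale channel from the resulting parent-child distances.

```python
import numpy as np

def stylize_per_joint(joint_positions, warp_fn):
    """Warp each joint trajectory independently.

    joint_positions: (T, N, 3) array of joint positions over time.
    warp_fn:         maps a (T, 3) trajectory to warped frame times s(t),
                     e.g. lambda p: animation_bilateral_warp(p, 20, 6.8).
    """
    T, N, _ = joint_positions.shape
    t = np.arange(T)
    out = np.empty_like(joint_positions)
    for j in range(N):
        traj = joint_positions[:, j, :]
        s = warp_fn(traj)
        for k in range(3):                  # p_j(s(t)) by interpolation
            out[:, j, k] = np.interp(s, t, traj[:, k])
    return out

def link_scales(warped, parent, rest_lengths):
    """Per-frame scale channel for each link, allowing limbs to stretch.

    parent[j] is the parent joint index of joint j (root: parent[j] == -1).
    rest_lengths[j] is the original length of the link to joint j.
    """
    T, N, _ = warped.shape
    scales = np.ones((T, N))
    for j in range(N):
        if parent[j] >= 0 and rest_lengths[j] > 0:
            d = np.linalg.norm(warped[:, j] - warped[:, parent[j]], axis=1)
            scales[:, j] = d / rest_lengths[j]
    return scales
```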

4.3. Rigid body simulation

The rigid body simulation system allows the user to generate realistic animation sequences using a large number of rigid bodies, which would be very difficult to create manually. Although this system generates physically realistic motions of rigid bodies, one may want to stylize those motions to make them suitable for cartoon animation. With previous approaches, this requires either developing a stylized simulation system such as [4] or converting the simulation into keyframe animation data in order to control the animation timing. In contrast, we can easily stylize this animation using our method, because it requires neither keyframe information nor the implementation of a stylized simulation system. We first apply our filter to the spatial trajectory of a simple rigid object. Fig. 8b shows a simple example of a falling box. We can observe that the slow-in and slow-out effects are generated around the extreme positions of the box (e.g. the maximum points).

Fig. 8. Example of falling box animation produced by rigid body simulation. (a) The original animation data, (b) the exaggerated motion produced by our filter, and (c) the non-uniform scaling when the squash-and-stretch effects are added to the filtered object in (b).

If the collision information from the simulation is known, we can also add the squash-and-stretch effect to the animation, generating more dramatic results by combining the temporal exaggeration with spatial exaggeration. The detailed algorithm is described below (a code sketch follows the list):

• Stretching is performed for every frame except the colliding frames. The stretching length l and axis a are defined by computing the difference d between the original position p(t) and its exaggerated position p(s(t)) produced by our filter: d = p(s(t)) − p(t), l = |d|, a = d/l. Let l_o be the length of the object along axis a. A scaling transformation is then applied to the object, where the scaling value is computed as c = (l + l_o)/l_o. Note that this scaling transformation should be non-uniform to preserve the volume of the object: the scaling value along the axis a is c, and the scaling value along the two directions perpendicular to a is 1/√c.

• Squashing is performed only for frames in which the object collides. The squashing axis a is the normal direction of the collision, and the scaling value is defined by l_o/(l_o + q max(v · a, 0)), where v is the velocity of the object and q is a user-defined parameter controlling the amount of squashing exaggeration. After the non-uniform scaling transformation is applied to the object, an additional translation is performed in order to maintain the contact point.
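These two steps map to a few lines of vector math. The sketch below is our own illustration of the bullets above, with hypothetical names; it computes the stretch and squash transforms for one frame along with the volume-preserving scale matrix.

```python
import numpy as np

def scale_along_axis(a, c):
    """Non-uniform scale: factor c along unit axis a, 1/sqrt(c) across it,
    so the volume is preserved (c * (1/sqrt(c))**2 == 1)."""
    a = a / np.linalg.norm(a)
    outer = np.outer(a, a)
    return c * outer + (1.0 / np.sqrt(c)) * (np.eye(3) - outer)

def stretch_transform(p_orig, p_warped, length_along_axis):
    """Stretching step: axis and amount from d = p(s(t)) - p(t)."""
    d = p_warped - p_orig
    l = np.linalg.norm(d)
    if l < 1e-8:                       # no displacement: no stretch
        return np.eye(3)
    a = d / l
    c = (l + length_along_axis) / length_along_axis   # c = (l + l_o) / l_o
    return scale_along_axis(a, c)

def squash_transform(collision_normal, velocity, length_along_axis, q):
    """Squashing step on collision frames: scale l_o / (l_o + q*max(v.a, 0))."""
    a = collision_normal / np.linalg.norm(collision_normal)
    c = length_along_axis / (length_along_axis + q * max(np.dot(velocity, a), 0.0))
    return scale_along_axis(a, c)
```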

We first apply the squash-and-stretch algorithm to the transformation sequence of the object before time-warping occurs, using the reparameterization function s(t). Fig. 8c illustrates the result of the method described above. By combining spatial exaggeration (the non-uniform scaling) with temporal exaggeration, the exaggerated object looks more exciting and more cartoon-like.

If the object collides with multiple objects, we cannot directly apply the squashing algorithm to the object. To solve this problem, we use a weighted sum of transformation matrices based on a radial basis function. Let E be the set of collision events e for the object. First, the squashing transformation matrix T_e is computed for each collision event e. Then, we calculate the weight ω_e of each vertex x in the object for every squashing transformation matrix as follows:

$$\omega_e = 1/|x - p_e|^2, \qquad (8)$$

where p_e is the contact position for collision event e. Finally, the deformed vertex x' in the squashed object is computed by applying the weighted sum of T_e to the original vertex:

$$x' = \frac{\sum_{e \in E} \omega_e T_e}{\sum_{e \in E} \omega_e}\, x. \qquad (9)$$
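A direct reading of Eqs. (8) and (9) in code (our own sketch; the squashing matrices would come from a helper like the hypothetical `squash_transform` above):

```python
import numpy as np

def blend_squash(vertices, transforms, contact_points, eps=1e-8):
    """Deform vertices under multiple simultaneous collisions, Eqs. (8)-(9).

    vertices:       (V, 3) object vertices x.
    transforms:     list of (3, 3) squashing matrices T_e, one per collision.
    contact_points: list of (3,) contact positions p_e.
    """
    out = np.empty_like(vertices)
    for i, x in enumerate(vertices):
        num = np.zeros((3, 3))
        den = 0.0
        for T_e, p_e in zip(transforms, contact_points):
            w = 1.0 / (np.dot(x - p_e, x - p_e) + eps)  # Eq. (8); eps avoids 1/0
            num += w * T_e
            den += w
        out[i] = (num / den) @ x                        # Eq. (9)
    return out
```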

Fig. 9 illustrates a sphere simultaneously colliding with two other objects, a plane and a box. We can observe that the sphere squashes appropriately under multiple collisions as a result of combining two different squashing transformations. In [4], multiple collisions of a rigid object are handled by ordering them and then processing them sequentially; however, the authors of that study noted limitations when their method was used for the large-scale simulation of multiple objects. It would appear that our system, using the method described above, is capable of handling an animation sequence with multiple objects. Fig. 10 shows an example with multiple rigid boxes.

One of the limitations of our squash-and-stretch method is that the resulting object might not preserve its volume and could be largely distorted, since our squashing algorithm is based on a weighted sum of transformation matrices. Therefore, our system can generate undesirable squash-and-stretch effects for a complex animation scene with a large number of rigid objects.

Fig. 9. Example of a falling sphere with multiple collisions. (a) The original animation data and (b) the exaggerated motion produced by our squash-and-stretch method.

5. Comparison

We compared our results with exaggerated animation produced using previous methods. Several previous methods can be utilized for stylizing an animation signal. In this paper, the exaggerated animation produced by the cartoon animation filter [6] is compared with our results, since this method is not only simple and effective but is also based on convolution of the signal. Several examples of keyframe animation and motion capture data are cartoon-stylized using both our method and the cartoon animation filter for a fair comparison. Because rigid body animation exaggerated by the cartoon animation filter cannot preserve simulation information such as collision points, it is omitted from the comparison.

Fig. 11 shows the differences between the exaggerated animation trajectory produced by our filter and that generated by the cartoon animation filter. While our result changes the spacing of each frame relative to the original animation trajectory (Fig. 11b), the cartoon animation filter tends to exaggerate the spatial shape of the trajectory, especially at the corner (Fig. 11c). Of course, we can combine the effects of these two filters by sequentially applying them to the animation trajectory; Fig. 11d shows the result of such a sequential process. We can observe that this result is exaggerated both spatially and temporally.
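The sequential combination is simple to express in code. The sketch below is our own approximation: the `cartoon_animation_filter` follows the spirit of Wang et al. [6] (subtracting a LoG-convolved copy of the signal), but the kernel normalization and parameter names are our assumptions, not the published implementation.

```python
import numpy as np

def log_kernel(sigma, radius):
    """Laplacian-of-Gaussian kernel, shifted to zero sum."""
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    log = (t ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g
    return log - log.mean()            # zero-sum so constant signals pass through

def cartoon_animation_filter(signal, sigma, radius, amount=1.0):
    """Spatial exaggeration in the spirit of Wang et al. [6]:
    x*(t) = x(t) - amount * (x ⊗ LoG)(t), applied per coordinate."""
    k = log_kernel(sigma, radius)
    out = np.empty_like(signal)
    for c in range(signal.shape[1]):
        out[:, c] = signal[:, c] - amount * np.convolve(signal[:, c], k, mode="same")
    return out

# Sequential stylization (hypothetical usage with the earlier sketches):
# spatial = cartoon_animation_filter(traj, sigma=4.0, radius=12)
# s = animation_bilateral_warp(spatial, h=20, sigma_t=6.8)
# combined = resample(spatial, s)
```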

Fig. 12 compares the exaggerated results for a jumping motion sequence. The red character is the original motion, the green character is the exaggerated motion produced by our method, the blue character is the stylized motion generated using the cartoon animation filter, and the black character is the stylized motion produced using both our method and the cartoon animation filter. We can observe that our filter lets the joints move slowly for a moment, so that the jumping motion is temporally exaggerated and has partially squashed-and-stretched bodies. On the other hand, the motion generated by the cartoon animation filter has wildly swinging limbs; therefore, it is spatially exaggerated. Finally, we can see that both spatial and temporal exaggeration are present in the motion stylized by sequentially applying these two filters. From this observation, we argue that the effect produced by our filter differs from the spatial exaggeration generated by the cartoon animation filter, and so can be separated from it. Furthermore, we expect that more varied stylizations of motion can be achieved by combining these two filters.

Fig. 10. Example of falling boxes with multiple collisions. (a) The original animation data and (b) the exaggerated animation produced by our squash-and-stretch method.

Fig. 11. Comparison of the exaggerated results of the keyframe animation. (a) The original animation trajectory, (b) the exaggerated motion produced by our filter (α/α_max = 0.8), (c) the exaggerated motion produced by the cartoon animation filter, and (d) the exaggerated motion produced by applying the cartoon animation filter and our filter sequentially.

Fig. 12. Comparison of the exaggerated results of motion capture data. (a) The original motion data, (b) the exaggerated motion produced by applying our filter to each joint (α/α_max = 0.8), (c) the exaggerated motion produced by the cartoon animation filter, and (d) the exaggerated motion produced by applying the cartoon animation filter and our filter sequentially.

6. Conclusion

In this paper, we formulated a bilateral filter for animation data as a curve reparameterization in order to enhance the slow-in and slow-out effects of the original animation data. Our method is simple and easy to implement. Because the computational cost of the filtering is small, our method can be executed in real time. Furthermore, our method can be applied to a wide variety of animation data, including hand-drawn animation, motion capture data, and physically-based rigid body animation.

Our animation bilateral filter concentrates on the temporal exaggeration of the original animation data according to the slow-in and slow-out rule. To generate more dramatic cartoon-style animation, we recommend also using spatial exaggeration methods such as the cartoon animation filter proposed by Wang et al. [6]: while our method generates cartoon-style timing, cartoon-style deformation of the object's shape and trajectory is clearly also important for producing good results.

Acknowledgments

This work was supported by the IT R&D program of MKE/MCST/IITA [2008-F-031-01, Development of Computational Photography Technologies for Image and Video Contents].

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at doi:10.1016/j.gmod.2011.02.002.

References

[1] F. Thomas, O. Johnston, The Illusion of Life: Disney Animation, Walt Disney Productions, 1981.
[2] J. Lasseter, Principles of traditional animation applied to 3D computer animation, in: Proceedings of ACM SIGGRAPH '87, 1987, pp. 35–44. doi:10.1145/37401.37407.
[3] R. Williams, The Animator's Survival Kit: A Manual of Methods, Principles and Formulas, Faber and Faber, 2001.
[4] S. Chenney, M. Pingel, R. Iverson, M. Szymanski, Simulating cartoon style animation, in: Proceedings of the 2nd International Symposium on Non-photorealistic Animation and Rendering, ACM Press, 2002, pp. 133–138. doi:10.1145/508530.508553.
[5] J.-H. Kim, J.-J. Choi, H.J. Shin, I.-K. Lee, Anticipation effect generation for character animation, in: Proceedings of the Computer Graphics International Conference, 2006, pp. 639–646.
[6] J. Wang, S.M. Drucker, M. Agrawala, M.F. Cohen, The cartoon animation filter, in: Proceedings of ACM SIGGRAPH '06, 2006, pp. 1169–1173. doi:10.1145/1179352.1142010.
[7] J.-Y. Kwon, I.-K. Lee, Exaggerating character motions using sub-joint hierarchy, Computer Graphics Forum 27 (6) (2008) 1677–1686.
[8] S.C.L. Terra, R.A. Metoyer, A performance-based technique for timing keyframe animations, Graphical Models 69 (2) (2007) 89–105. doi:10.1016/j.gmod.2006.09.002.
[9] D. White, K. Loken, M. van de Panne, Slow in and slow out cartoon animation filter, in: SIGGRAPH '06 Poster, 2006.
[10] K. Tateno, W. Xin, S. Obayashi, K. Kondo, T. Konma, Motion stylization using a timing control method, in: SIGGRAPH '06 Poster, 2006.
[11] C. Tomasi, R. Manduchi, Bilateral filtering for gray and color images, in: ICCV '98: Proceedings of the Sixth International Conference on Computer Vision, IEEE Computer Society, Washington, DC, USA, 1998, p. 839.
[12] M. Unuma, K. Anjyo, R. Takeuchi, Fourier principles for emotion-based human figure animation, in: Proceedings of ACM SIGGRAPH '95, ACM Press, 1995, pp. 91–96. doi:10.1145/218380.218419.
[13] A. Bruderlin, L. Williams, Motion signal processing, in: Proceedings of ACM SIGGRAPH '95, 1995, pp. 97–104. doi:10.1145/218380.218421.
[14] J. Lee, S.Y. Shin, A coordinate-invariant approach to multiresolution motion analysis, Graphical Models 63 (2) (2001) 87–105. doi:10.1006/gmod.2001.0548.
[15] J. Lee, J. Chai, P.S.A. Reitsma, J.K. Hodgins, N.S. Pollard, Interactive control of avatars animated with human motion data, in: Proceedings of ACM SIGGRAPH '02, ACM Press, 2002, pp. 491–500. doi:10.1145/566570.566607.
[16] A. Witkin, Z. Popovic, Motion warping, in: Proceedings of ACM SIGGRAPH '95, ACM, New York, NY, USA, 1995, pp. 105–108. doi:10.1145/218380.218422.
[17] M. Kass, J. Anderson, Animating oscillatory motion with overlap: wiggly splines, in: ACM SIGGRAPH 2008 Papers, New York, NY, USA, 2008, pp. 1–8. doi:10.1145/1399504.1360627.
[18] P. Coleman, J. Bibliowicz, K. Singh, M. Gleicher, Staggered poses: a character motion representation for detail-preserving editing of pose and coordinated timing, in: Proceedings of the Eurographics/ACM SIGGRAPH Symposium on Computer Animation '08, 2008.
[19] C. Hecker, B. Raabe, R.W. Enslow, J. DeWeese, J. Maynard, K. van Prooijen, Real-time motion retargeting to highly varied user-created morphologies, in: Proceedings of ACM SIGGRAPH '08, 2008.