Determination of camera parameters for character motions using motion area


Visual Comput (2008) 24: 475–483 · DOI 10.1007/s00371-008-0228-x · ORIGINAL ARTICLE

Ji-Yong Kwon · In-Kwon Lee

Determination of camera parameters for character motions using motion area∗

Published online: 17 May 2008
© Springer-Verlag 2008

J.-Y. Kwon · I.-K. Lee (✉)
Yonsei University, Seoul, Korea
[email protected], [email protected]

Abstract We propose a method to determine camera parameters for character motion that considers the motion itself. The basic idea is to approximately compute the area swept by the motion of the character's links that are orthogonally projected onto the image plane, which we call the "motion area". Using the motion area, we can determine good fixed camera parameters and camera paths for a given character motion in off-line or real-time camera control. In our experimental results, we demonstrate that our camera path generation algorithms can compute a smooth moving camera path while the camera effectively displays the dynamic features of the character motion. Our methods can easily be used in combination with methods for generating occlusion-free camera paths. We expect that our methods can also be utilized by general camera planning methods as one of the heuristics for measuring the visual quality of scenes that include dynamically moving characters.

Keywords Camera planning · Motion exploration · Motion area

1 Introduction

Controlling a camera in 3D environments is a very important problem, because every 3D graphical application has to decide how to display the 3D objects in the scene on the screen. Therefore, many researchers have developed novel methods for controlling a camera. Some of these methods enable users to directly manipulate the camera; others automatically determine camera parameters [10].

Various methods have been developed for automated virtual camera planning [5] and for measuring the visual quality [16, 19, 20] of a given scene.

However, to the best of our knowledge, no existing method considers the character motions in a scene, as the previous camera control methods are designed for general 3D scenes.

∗This work was supported by the IT R&D program of MIC/IITA [Development of Computational Photography Technologies for Image and Video Contents].

The previous methods deal with user-defined constraints or cinematographic techniques tied to the semantic meaning of a scene, rather than with the character motion itself. However, we have to consider the character motions in a scene, as it is obvious that a character motion can be shown in diverse ways depending on the viewing direction.

Furthermore, it is sometimes necessary to view a character motion without considering a scenario, such as in a character viewer in a 3D game or in scene exploration for character motion. For these reasons, our research is motivated by the following question: is there a quantitative measure for selecting good camera parameters by considering only the character motion?

In this paper, we propose a quantitative and intuitive measure for determining camera control parameters that considers only the character motion. Our basic idea is to approximately compute the area swept by the motion of the character's links that are orthogonally projected onto the image plane, which we call the "motion area". The larger the motion area value, the greater is the dynamic


view of the motion. In other words, by using motion area values, we can measure how dynamically a character will be shown on the screen for given camera parameters. Thus, the motion area can be useful as one of many heuristics for choosing good camera parameters for a scene with character motion. We can also effectively exploit this measure to find a good viewpoint for character motion exploration, in which only the motion of a character has to be considered.

The rest of this paper is organized as follows. We review related work in Sect. 2 and introduce the motion area in Sect. 3. In Sect. 4, we explain how the motion area is used in camera control for simple cases.

Some experimental results and comparisons are presented in Sect. 5, and we discuss our work and draw conclusions in Sect. 6.

2 Related work

Christie et al. [5] surveyed the state of the art in automated camera planning techniques. They reported that methods for automatic virtual camera control can be categorized by the expressivity of the set of properties and the characteristics of the solving mechanisms.

We review the previous methods related to our work on the basis of the criteria that they exploit for estimating visual quality.

There have been many studies on finding good camera control parameters by satisfying several user-defined constraints. Blinn [2] used algebraic techniques to decide proper values of low-level camera parameters for a given scene and the desired actor placements. Drucker and Zeltzer [7] proposed the CamDroid system, which could be used for controlling a virtual camera in widely disparate types of graphical environments. In this system, the camera is first set up with an optimal position for individual shots subject to some constraints, and then the camera parameters are automatically tuned for a given shot based on a general-purpose continuous optimization paradigm. Bares and his colleagues [1] used a constraint solver to find a solution to various requirements of camera viewing conditions imposed by users, whereas Halper and Olivier [11] used a genetic algorithm to find good camera placements. Halper et al. [10] proposed a camera engine for computer games, which paid particular attention to the trade-off between constraint satisfaction and frame-to-frame coherence. Gleicher and Witkin [8] introduced through-the-lens camera control, which allowed users to control the camera via features in the images, and Kyung and his colleagues [14] improved that method by formulating the problem as a constrained non-linear inversion problem. Christie et al. [6] used a semantic space partitioning approach, similar to binary space partitioning, and tried to isolate identical possible solutions in 3D volumes with respect to their visual properties.

Many idiom-based approaches have also been studied during the last decade. He and his colleagues [12] proposed a system called the Virtual Cinematographer that used the concept of the idiom. The system treats shots and shot transitions as the states and arcs of a finite state machine, respectively. Christianson et al. [4] proposed a method that used a declarative language to formalize several principles of cinematography. Tomlinson et al. [21] proposed a behavior-based autonomous cinematography system that encodes the camera as a creature. This creature had motivational desires, controlling the camera and lighting in order to augment emotional content. Kennedy and Mercer [13] employed concepts from expert systems. Lin and his colleagues [17] encapsulated the principles of camera control in real-time 3D games and the concepts of constraints and cinematography into camera modules.

Some researchers developed their own methods to measure visual quality and exploited them to find good camera parameters. Gooch et al. [9] reported a useful criterion for determining a viewpoint, derived from the results of perceptual research, called the canonical viewpoint. They exploited this criterion and additional heuristics of artists to find a good viewpoint. Sokolov and Plemenos [19] introduced a method to measure viewpoint quality based on the total curvature of the visible surfaces of given objects. They used it not only to find a good fixed viewpoint for a given scene, but also to search for a good path for scene exploration [20]. Lee et al. [16] presented the mesh saliency scheme, which is similar to image saliency. They showed that mesh saliency can also be used to find a good viewpoint for a given mesh.

The camera control methods described above can automatically generate appropriate camera control parameters for a given scene and situation, but not specifically for the motion of characters. We propose a novel method to measure the visual quality of a moving articulated character. Our method can be viewed as one of many heuristics for measuring visual quality; thus, it can be combined with any other camera planning method. Furthermore, when only the character motion has to be considered, as in motion exploration, this measure can be effectively applied to find a good viewpoint.

3 The motion area

We define the motion area as the area, projected onto a view plane, swept by all links of a character. To calculate this, we have to know the projection matrix of the camera and the character motion. In this paper, we only consider the case of an orthogonal projection matrix because of its simplicity. (This does not mean that we only treat orthogonal projection applications; rather, we only use the orthogonal projection setting for simple computations, which can be an effective approximation of the motion area under perspective projection.)

Note that our method can be an approximate version of the relative motion area in the case of perspective projection. If we want to use perspective projection with various camera–target distances and field-of-view (FOV) angles, the absolute magnitude of the motion area does not convey a useful meaning, because a small camera–target distance and a large FOV angle trivially produce a large motion area value.

A projection plane in orthogonal projection depends only on the viewing direction. The projection matrix P(v) derived from the viewing direction v can be expressed as P(v) = [v1 v2]^T, where v1 and v2 are basis vectors of the projection plane, which is perpendicular to v.

Let a character C be a pair of sets (J, L), where J is a set of N joints {j_i | i ∈ [1, N]} of the character and L is a set of index pairs {(a, b) | a, b ∈ [1, N]}, meaning that two joints j_a and j_b define a link (j_a, j_b), where j_a is a parent of j_b in the joint hierarchy of the character. Each joint j_i can be denoted as a curve j_i(t) that maps a time t ∈ [0, T] to a position in world coordinates. As shown in Fig. 1, a pair of curves j_a(t) and j_b(t), (a, b) ∈ L, and the two line segments formed by the link at the specific times t0 and t1 form a surface S_(a,b)(t0, t1) with area A_(a,b)(t0, t1). The motion area A(v, t0, t1), the sum of the areas of S_(a,b)(t0, t1) projected onto the plane P(v) for all links, is defined as follows:

A(v, t0, t1) = (1 / (t1 − t0)) Σ_{(a,b)∈L} A^v_(a,b)(t0, t1),   (1)

where A^v_(a,b)(t0, t1) is the projected area of S_(a,b)(t0, t1), which is computed by first projecting the link (j_a, j_b) with the projection matrix P(v) and then sweeping the projected link according to the given motion from t0 to t1. We use the sum of the motion area values divided by the time interval to compute the average motion area value per unit time.

Fig. 1. Motion area: the area of the region swept by a link of a character

Let p^v_i(t) be the projected curve of j_i(t) by P(v), that is, p^v_i(t) = P(v) j_i(t). Assuming that |t1 − t0| is small enough, we can approximate A^v_(a,b)(t0, t1) as the area of a rectangle with the four points p^v_a(t0), p^v_a(t1), p^v_b(t0), and p^v_b(t1) (shown as a green rectangle in Fig. 1). As a rectangle's area can be computed as the sum of the areas of two triangles (the green dashed line in Fig. 1), A^v_(a,b)(t0, t1) becomes

A^v_(a,b)(t0, t1) = (1/4) (|A^T P(v)^T X P(v) B|^2 + |C^T P(v)^T X P(v) D|^2),   (2)

where

A = p_b(t0) − p_a(t0),  B = p_a(t1) − p_a(t0),
C = p_b(t0) − p_b(t1),  D = p_a(t1) − p_b(t1),  and

X = [ 0  −1
      1   0 ].

Note that we use the sum of the squared areas of the two triangles instead of the absolute area, for computational simplicity. In our experiments, the character motions consist of 30 frames per second; thus, we set |t1 − t0| to 1/30 second.

The area of a triangle is equal to half the magnitude of the cross product of two edge vectors of the triangle. So, Eq. 2 can be converted into:

A^v_(a,b)(t0, t1) = (1/4) (|(A × B) · v|^2 + |(C × D) · v|^2).   (3)

Equation 3 can be simplified into a quadratic form of v as follows:

A^v_(a,b)(t0, t1) = (1/4) (|(A × B) · v|^2 + |(C × D) · v|^2)
                  = (1/4) v^T ((A × B)(A × B)^T + (C × D)(C × D)^T) v
                  = v^T A_(a,b)(t0, t1) v.   (4)

Therefore, the motion area A(v, t0, t1) can also be formulated as a quadratic form:

A(v, t0, t1) = (1 / (t1 − t0)) Σ_{(a,b)∈L} A^v_(a,b)(t0, t1)
             = (1 / (t1 − t0)) v^T (Σ_{(a,b)∈L} A_(a,b)(t0, t1)) v
             = (1 / (t1 − t0)) v^T A(t0, t1) v.   (5)

We can expect that the character motion projected onto the view plane becomes more dynamic as the motion area becomes larger. Thus, the motion area can be used as a measure of the dynamism of the screen with respect to the viewing direction.
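As a concrete illustration of Eqs. 4 and 5, the matrix A(t0, t1) can be accumulated link by link from the cross products of the swept quad's edge vectors. The sketch below is our own minimal NumPy rendering, not the authors' code; representing joint trajectories as callables (`joints[i](t)` returning a 3D position) and all function names are assumptions:

```python
import numpy as np

def motion_area_matrix(joints, links, t0, t1):
    """Accumulate the symmetric 3x3 matrix A(t0, t1) of Eqs. 4-5 so that the
    motion area for a unit viewing direction v is v @ M @ v / (t1 - t0)."""
    M = np.zeros((3, 3))
    for (a, b) in links:
        # Edge vectors of the two triangles spanning the swept quad (cf. Eq. 2).
        A = joints[b](t0) - joints[a](t0)
        B = joints[a](t1) - joints[a](t0)
        C = joints[b](t0) - joints[b](t1)
        D = joints[a](t1) - joints[b](t1)
        for u in (np.cross(A, B), np.cross(C, D)):
            M += 0.25 * np.outer(u, u)  # (1/4) u u^T, per Eq. 4
    return M

def motion_area(v, joints, links, t0, t1):
    """Eq. 5: average motion area per unit time for viewing direction v."""
    v = np.asarray(v, dtype=float)
    return v @ motion_area_matrix(joints, links, t0, t1) @ v / (t1 - t0)
```

For a single unit-length link translating sideways by one unit, the face-on view yields 0.5 (the sum of the two squared triangle areas, following the squared-area convention noted above), while an edge-on view yields 0.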


4 Automatic camera control using motion area

In this section, we propose three automatic camera control algorithms as extensions of the motion area described in Sect. 3. Note that all of the algorithms we propose consider only the character motion during the process; thus, they cannot find a solution that accounts for the semantic meaning of a given motion or for cinematographic effects. However, they are quite helpful for finding a dynamic viewpoint in applications such as motion exploration.

4.1 Determination of fixed camera parameters

Using the motion area, we can find good fixed camera parameters for a given character motion. This can be achieved by solving the optimization problem that maximizes the motion area of the character motion. Equation 6 is the mathematical representation of this optimization problem.

maximize O(v) = vTA(0, T )v subject to vTv = 1. (6)

Equation 6 can easily be solved by finding the eigenvector of the largest eigenvalue of A(0, T) [15].
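This is the standard Rayleigh-quotient argument: for a symmetric matrix, v^T M v over unit vectors is maximized by the eigenvector of the largest eigenvalue. A minimal NumPy sketch (our own illustration; the function name is an assumption):

```python
import numpy as np

def best_fixed_view(M):
    """Solve Eq. 6: maximize v^T M v subject to v^T v = 1.
    The maximizer is the unit eigenvector of M's largest eigenvalue."""
    w, V = np.linalg.eigh(M)   # eigh: M is symmetric by construction
    return V[:, np.argmax(w)]  # column = unit eigenvector, largest eigenvalue
```

Note that v and −v give the same motion area, so the sign of the returned direction is arbitrary.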

Figure 2 shows two examples of using our algorithm to find fixed camera parameters. The time flow is represented as a change of color from red to blue. The figure illustrates how our algorithm automatically finds camera parameters that maximize the motion area, which is very helpful in observing subtle character movements.

Fig. 2. Two examples of finding fixed camera parameters

One may use other criteria to determine good camera parameters for a dynamic scene. The bounding box of the character can be considered a good heuristic for measuring the importance of a given object [20]. However, finding the camera parameters that maximize the area of the bounding box projected onto the image plane will sometimes fail to yield appropriate camera parameters for a dynamic scene. For example, imagine that a character swings his or her sword horizontally (see Fig. 3). If we use the viewpoint that maximizes the projected area of the bounding box, the side of the character will be displayed and the dynamic effect of the swing motion will be weakened, because the projected area is maximized at the side view when the sword is located at the rear or the front of the character, as in Fig. 3a and b. Our method, by contrast, can capture the swing motion effectively. Thus, it will recommend a high- or low-angle shot in order to maximize the dynamic effect, as we can see in Fig. 3c.

4.2 Off-line camera path generation for a given motion

By applying the optimization method described in Sect. 4.1 to short time intervals, we can generate a smooth camera path for a given motion. The basic idea of the algorithm is to determine a camera direction for each frame. However, naively using the sequence of camera directions resulting from the optimization can produce a jerky camera.

Thus, we smoothly blend the viewing directions using the motion area values as blending weights. Figure 4 shows the algorithm.

Fig. 3a–c. Swinging motion example

Fig. 4. Off-line camera path generation algorithm using a weighted sum

The camera direction of each frame is initially determined by solving the optimization problem described in Sect. 4.1. However, when a character rarely moves in a given time interval, the computation can fail to find an appropriate viewing direction, because the motion area matrix is nearly zero in such a case. Thus, a pre-defined threshold T is used as the minimum motion area value to guarantee that the selected initial viewing direction is meaningful. If the computed motion area value is too small, the algorithm takes a larger range of motion than in the previous step by increasing the range level l. After selecting the initial viewing directions of the whole sequence, the algorithm computes ideal camera directions for all frames by weighted averaging of the initial sequence of camera directions, with the motion area values as weights. The weights help to compute camera directions that show a larger motion area of the character motion. Note that the smoothness of the camera direction sequence can be controlled with a user-defined coefficient k that determines how many neighboring frames are blended.
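The weighted-averaging step can be sketched as follows. This is our own reading of the Fig. 4 idea, not the authors' pseudocode: the sign alignment before averaging (a direction and its negation yield the same motion area) and all names are assumptions.

```python
import numpy as np

def smooth_directions(dirs, weights, k):
    """Blend each frame's viewing direction with its 2k+1 neighbors,
    weighted by the per-frame motion area values (sketch; sign alignment
    and renormalization are our assumptions)."""
    dirs = np.asarray(dirs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n = len(dirs)
    out = np.empty_like(dirs)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        acc = np.zeros(3)
        for j in range(lo, hi):
            d = dirs[j]
            if np.dot(d, dirs[i]) < 0:  # align signs before averaging
                d = -d
            acc += weights[j] * d
        out[i] = acc / np.linalg.norm(acc)  # keep unit length
    return out
```

Larger k blends more neighbors and thus yields a smoother, slower-changing direction sequence.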

4.3 Real-time camera path generation

The optimization problem for a short time interval described in Sect. 4.2 can be solved very quickly by eigenvalue decomposition, enabling us to generate a camera path for a given motion on the fly in real time. Figure 5 shows the pseudocode of the algorithm. Instead of computing a new camera rotation parameter at every frame, the algorithm determines the new camera direction using the optimization at every interval δt, which is larger than ∆t, the time interval between two adjacent frames. The camera directions in the middle frames are computed using SLERP interpolation [18] between the two quaternions representing the camera directions.
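The in-between interpolation uses the standard quaternion SLERP formula [18]. The sketch below is a generic implementation, not the authors' code; the near-parallel fallback to a normalized linear interpolation is our own guard:

```python
import numpy as np

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions q0 and q1
    for u in [0, 1], as used for camera directions between two
    optimization updates."""
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    d = np.dot(q0, q1)
    if d < 0.0:              # q and -q are the same rotation: take the
        q1, d = -q1, -d      # shorter arc
    if d > 0.9995:           # nearly parallel: fall back to normalized lerp
        q = (1.0 - u) * q0 + u * q1
        return q / np.linalg.norm(q)
    theta = np.arccos(d)     # angle between the two quaternions
    return (np.sin((1.0 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)
```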

In the above algorithm, Ω(A) is the function that controls the blending weight of the camera parameter with respect to the motion area:

Ω(A) = 0                                                    if A < A_low,
       (1/2)(1 − cos(((A − A_low) / (A_high − A_low)) π))   if A_low ≤ A < A_high,
       1                                                    if A ≥ A_high.

Fig. 5. Real-time camera path generation algorithm

As we have mentioned in Sect. 4.2, in the case of a very small motion area value, corresponding to nearly no movement, the viewing direction obtained from the optimization problem can be meaningless. This case can be prevented by using Ω(A).
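Ω(A) is simply a cosine ramp between the two thresholds; a direct sketch (function and parameter names assumed):

```python
import math

def blend_weight(area, a_low, a_high):
    """Omega(A) from Sect. 4.3: 0 below a_low, 1 above a_high, and a
    smooth half-cosine ramp in between."""
    if area < a_low:
        return 0.0
    if area >= a_high:
        return 1.0
    return 0.5 * (1.0 - math.cos((area - a_low) / (a_high - a_low) * math.pi))
```

Because the ramp has zero slope at both ends, the blending weight changes smoothly as the motion area crosses either threshold.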

The pseudocode described above only manages the camera rotation parameter. We choose the position at time t of the root joint of the character as the camera look-at position; the camera look-from position is then automatically computed so as to maintain the pre-defined distance between the look-from and look-at positions. Gaussian filtering of the series of root-joint positions can help prevent unwanted shaking of the camera.
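The look-from placement and the root-trajectory filtering can be sketched as follows. The paper gives no formulas for this step, so placing the camera at a fixed distance behind the look-at point along the viewing direction, and the Gaussian kernel parameters, are our assumptions:

```python
import numpy as np

def look_from(look_at, view_dir, distance):
    """Place the camera behind the look-at point along the viewing
    direction, at a fixed distance (assumed reading of Sect. 4.3)."""
    v = np.asarray(view_dir, dtype=float)
    return np.asarray(look_at, dtype=float) - distance * v / np.linalg.norm(v)

def gaussian_smooth(positions, sigma, radius):
    """1D Gaussian filtering of the root-joint trajectory (per coordinate)
    to suppress camera shake; edges are handled by repeating endpoints."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()                       # normalized Gaussian kernel
    p = np.asarray(positions, dtype=float)
    pad = np.concatenate([p[:1].repeat(radius, 0), p, p[-1:].repeat(radius, 0)])
    return np.stack([np.convolve(pad[:, i], k, mode="valid")
                     for i in range(p.shape[1])], axis=1)
```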

4.4 Additional constraints for camera control

The viewing direction selected by our algorithm can sometimes cause too high or too low an angle of shot, which is not good for showing the motion except in special cases. For example, when the character is lying down on a floor, a very high-angle camera is needed; in general, however, high- or low-angle shots should usually be avoided. We can prevent this situation by slightly modifying the optimization problem as follows:

maximize O(v) = v^T A(t, t_next) v − ω|v_2|^2   subject to   v^T v = 1,

where ω is a user-defined weight coefficient and v_2 is the y-axis element of v. Note that by minimizing the v_2 value, we can compute a more "flat" viewing direction that is more parallel to the ground plane. The equation can be simplified as:

O(v) = v^T A(t, t_next) v − ω v^T diag(0, 1, 0) v
     = v^T (A(t, t_next) − ω H) v,   (7)

where the objective function is still a quadratic form that can easily be solved.
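Since Eq. 7 remains a symmetric quadratic form, the penalized problem is solved by the same eigendecomposition as Eq. 6, now applied to A − ωH with H = diag(0, 1, 0). A sketch (function name assumed):

```python
import numpy as np

def best_flat_view(M, omega):
    """Solve Eq. 7: maximize v^T (M - omega * H) v subject to v^T v = 1,
    where H = diag(0, 1, 0) penalizes the y component of v to avoid
    extreme high- or low-angle shots."""
    H = np.diag([0.0, 1.0, 0.0])
    w, V = np.linalg.eigh(M - omega * H)
    return V[:, np.argmax(w)]
```

Increasing ω trades motion area for a flatter (more ground-parallel) viewing direction.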

Fig. 6. Correcting the camera look-from position with the floor constraint

In order to avoid the situation where the camera look-from position is located under the floor (see Fig. 6), we use the following simple correction algorithm. There are two candidates for the corrected look-from position, each satisfying a different criterion in Fig. 6. Position A preserves the viewing direction selected by our algorithm, whereas position B preserves the pre-defined distance between the camera look-from and look-at positions. We simply interpolate these two points with a user-defined blending parameter f ∈ [0, 1].

5 Results and comparison

We tested the effectiveness of our algorithms on several sequences of motion capture data. The test environment consisted of a 2.13 GHz Intel Core2 CPU with 2 GB of memory and a GeForce 7600 GT.

Figure 7 shows a comparison between commonly used viewing directions and the viewing direction selected by our fixed camera parameter selection algorithm for two motions. The time flow is represented by the same method used in Fig. 2. We can observe that our fixed camera can always display the whole motion of the character without any serious problem, whereas several motions cannot be appropriately observed using general cameras, such as views 2 and 5 in Fig. 7c and view 3 in Fig. 7e.

Fig. 7a–e. Comparison between our results and the others: the viewing directions in the first column are computed by our fixed camera parameter selection algorithm; the others were selected by a skillful animator or are traditional views such as the left and the front view

Figure 8 shows some snapshots captured by the cameras generated using our off-line and real-time camera path generation algorithms for several motions. In our experiments, we set the minimum motion area value T to 0.02. We can observe that the special dynamic features of the motions, such as the capoeira actions in Fig. 8b and c and the swinging arms in Fig. 8e and f, are effectively displayed by the smoothly moving camera. Note that the resulting camera paths of the off-line and real-time algorithms look similar. However, the off-line cameras change view more quickly than the real-time cameras, because the real-time algorithm computes a new camera parameter only every several frames, whereas the off-line cameras update the parameters more frequently.

Table 1 lists the average motion area value per frame of the examples in Fig. 8. We can see that our method finds a viewing direction with maximum motion area, which is helpful in selecting a viewing direction that displays the motion of the links more dynamically.

We performed experiments to test whether our algorithms based on the motion area successfully reflect the dynamic features of motions as perceived by human viewers. We showed five video sets of character motions (see Fig. 7) to nine adult viewers with no prior knowledge of the research. Each video set had six different fixed viewing directions: one was computed by our fixed camera parameter selection algorithm, and the others were selected by a skillful animator or were traditional views such as the left and the front view. Participants were then asked to vote for the three viewing directions that looked more dynamic than the others.

Table 2 shows the result of the votes. We can observe that the viewing directions selected by our algorithm tend to receive the largest score for each scene. Thus, we can expect that our algorithm is suitable for selecting viewing directions that make the scene of a motion more dynamic.

Fig. 8a–f. Two examples of generating the camera path: a–c are the camera paths for the capoeira motion generated by the user-controlled camera, the off-line camera path algorithm, and the real-time camera path algorithm, respectively, and d–f illustrate the dance motion in the same manner

Table 1. Average motion area per frame of each example in Fig. 8

         Fig. 8a   Fig. 8b   Fig. 8c   Fig. 8d   Fig. 8e   Fig. 8f
         1.4597    2.5551    2.0944    0.3994    0.5387    0.4657

Table 2. Scores of the various views of the motions. The italic-type score represents the highest score in each row

           ours   View 1   View 2   View 3   View 4   View 5
Fig. 7a     7       4        3        5        2        6
Fig. 7b     8       4        3        4        5        3
Fig. 7c     5       7        3        5        6        1
Fig. 7d     7       3        7        1        3        6
Fig. 7e     6       6        6        3        3        3

Our methods can easily be combined with other techniques for avoiding occlusion. In our experiment, we use a simple collision detection algorithm to locate the camera in occlusion-free zones. For the real-time camera path generation method, if the target is occluded, the camera look-from position is translated to be in front of the obstacles at each frame. For the off-line camera path generation method, we first compute the camera path without considering obstacles, and then modify the path to avoid them. As an abrupt change of camera position is not natural, we take several neighboring camera positions and smoothly edit the camera path curve using a technique similar to motion displacement mapping [3]. The modification is repeated until no more occlusion is detected. Figure 9a shows the camera trajectories generated by our off-line and real-time camera path generation algorithms, corrected to avoid occlusions. Some snapshots using the computed camera parameters are shown in Fig. 9c and d. We can observe that the camera avoids the occlusions by obstacles while the camera parameters effectively display the dynamic character motion. Except for the iterative correction method used for the off-line camera path, all of our methods are sufficiently fast for real-time execution. Note that the dance motion in Fig. 8 is a very long sequence consisting of 6651 frames, but the time needed to generate the camera path with our off-line algorithm is about 1 second.

Fig. 9a–d. Example of avoiding occlusions: a the top view of the example scene and camera path. The red curve is the camera path computed using our off-line algorithm (without avoiding occlusion), the green curve is the corrected path to avoid occlusion by the displacement mapping, and the blue curve is the real-time camera path. b–d are some snapshots captured by each camera, respectively

6 Discussion

In this paper, we introduced a quantitative measure for estimating how dynamic a character motion looks under given camera parameters, and proposed several methods for automatically controlling the camera displaying character motions. Each method produces slightly different camera path results, and we can choose among them according to the application at hand. Furthermore, the proposed algorithm for avoiding occlusions guarantees that the camera can capture the target without any occlusion by obstacles.

The motion area has several benefits for measuring the quality of viewing character motion on the screen. The method can be applied to any character regardless of its joint hierarchy, even if the character is not human-like. It can also be applied to multiple characters simultaneously. To effectively use our method for a scene with multiple characters, we have to consider the occlusions that may be caused by the characters themselves. Although the time for computing the motion area of multiple characters may increase, we expect that the method can be improved with a GPU-based implementation.

Our method has several limitations and requires further work in the future. First, perspective projection is not considered during the computation of motion area values, for simplicity of computation. If we use a perspective projection for computing motion area values, the number of degrees of freedom of the motion area function increases to six: three for position and three for camera rotation, even if we fix the FOV angle. Therefore, the optimization problem for determining the maxima of the motion area becomes much harder than in our problem setting. However, perspective projection should be considered for a more accurate computation than the current method. One possible approach is an image-precision algorithm similar to that of Halper et al. [10]: rendering the trajectories of the character's links and counting the rendered pixels to evaluate how visible the character motion is.

Second, as we have mentioned in Sect. 4, our method currently considers only the motion of a character when deciding on camera parameters; thus, the semantic meaning and cinematographic constraints of a given scene are not considered.

We expect that we can effectively control the camera for a given scene with a character motion by combining our methods with methods that deal with the meaning of the scene.

Our algorithm focuses on finding appropriate viewing directions. Other camera parameters, such as the camera look-from and look-at positions and the up-vector, are simply defined according to the viewing direction. We believe that the viewing direction of the camera has the greatest impact on the resulting displayed scenes. However, the other parameters should be controlled more carefully for better camera-path results.

Our simple algorithm for avoiding occlusions can generate an unnatural camera path in a scene that has many complex obstacles. During the iteration that corrects the distance with a displacement mapping, the resulting camera path can end up with a very short distance between the camera look-from and look-at positions. This problem could be solved by a more sophisticated method for occlusion avoidance.
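A minimal sketch of why this shrinkage occurs (our own illustration, not the paper's implementation; the obstacle model, step size, and helper names are all hypothetical): if occlusion is corrected by repeatedly displacing the look-from point toward the look-at point until the line of sight clears the obstacles, a deep obstacle can leave the camera almost on top of the target.

```python
import math

def occluded(eye, target, spheres):
    """True if the segment eye->target intersects any (center, radius) sphere."""
    ex, ey, ez = eye
    tx, ty, tz = target
    dx, dy, dz = tx - ex, ty - ey, tz - ez
    seg_len2 = dx * dx + dy * dy + dz * dz
    for (cx, cy, cz), r in spheres:
        # Closest point on the segment to the sphere center.
        t = ((cx - ex) * dx + (cy - ey) * dy + (cz - ez) * dz) / seg_len2
        t = max(0.0, min(1.0, t))
        closest = (ex + t * dx, ey + t * dy, ez + t * dz)
        if math.dist(closest, (cx, cy, cz)) < r:
            return True
    return False

def correct_by_displacement(eye, target, spheres, step=0.5, max_iter=100):
    """Push the look-from point toward the look-at point until unoccluded."""
    eye = list(eye)
    for _ in range(max_iter):
        if not occluded(tuple(eye), target, spheres):
            break
        d = math.dist(eye, target)
        for i in range(3):  # move 'step' units along the eye->target direction
            eye[i] += step * (target[i] - eye[i]) / d
    return tuple(eye)

# A deep obstacle between camera and character forces the camera very close:
eye, target = (10.0, 0.0, 0.0), (0.0, 0.0, 0.0)
obstacle = [((5.0, 0.0, 0.0), 4.0)]  # large sphere blocking the view
new_eye = correct_by_displacement(eye, target, obstacle)
print(math.dist(new_eye, target))  # shrinks from 10.0 to about 1.0
```

A more sophisticated corrector would also search sideways or over the obstacle instead of only along the viewing ray, which is exactly the kind of improvement suggested above.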

References

1. Bares, W., Thainimit, S., McDermott, S.: A model for constraint-based camera planning. In: Smart Graphics: Papers from the 2000 AAAI Spring Symposium, pp. 84–91. AAAI Press, Menlo Park, CA (2000)

2. Blinn, J.: Where am I? What am I looking at? IEEE Comput. Graph. Appl. 8(4), 76–81 (1988)

3. Bruderlin, A., Williams, L.: Motion signal processing. In: Proceedings of ACM SIGGRAPH ’95, pp. 97–104. ACM Press, New York, NY (1995)

4. Christianson, D.B., Anderson, S.E., He, L.-W., Salesin, D., Weld, D.S., Cohen, M.F.: Declarative camera control for automatic cinematography. In: AAAI/IAAI, vol. 1, pp. 148–155. Menlo Park, CA (1996)

5. Christie, M., Machap, R., Normand, J.M., Olivier, P., Pickering, J.: Virtual camera planning: a survey. In: SMARTGRAPH ’05: Proceedings of the 5th International Symposium on Smart Graphics, pp. 40–52. Springer (2005)

6. Christie, M., Normand, J.M.: A semantic space partitioning approach to virtual camera composition. Comput. Graph. Forum 24, 247–256 (2005)

7. Drucker, S.M., Zeltzer, D.: CamDroid: a system for implementing intelligent camera control. In: SI3D ’95: Proceedings of the 1995 Symposium on Interactive 3D Graphics, pp. 139–144. ACM Press, New York, NY (1995)

8. Gleicher, M., Witkin, A.: Through-the-lens camera control. Comput. Graph. 26(2), 331–340 (1992)

9. Gooch, B., Reinhard, E., Moulding, C., Shirley, P.: Artistic composition for image creation. In: Eurographics Workshop on Rendering, pp. 83–88. Springer (2001)

10. Halper, N., Helbing, R., Strothotte, T.: A camera engine for computer games: managing the trade-off between constraint satisfaction and frame coherence. In: Proc. Eurographics 2001, vol. 20, pp. 174–183. Blackwell Publishing, Oxford, UK (2001)

11. Halper, N., Olivier, P.: CamPlan: a camera planning agent. In: AAAI Workshop on Smart Graphics, pp. 92–100. AAAI Press, Menlo Park, CA (2000)

12. He, L.-W., Cohen, M.F., Salesin, D.H.: The virtual cinematographer: a paradigm for automatic real-time camera control and directing. In: Proceedings of ACM SIGGRAPH ’96, pp. 217–224. ACM Press, New York, NY (1996)

13. Kennedy, K., Mercer, R.E.: Planning animation cinematography and shot structure to communicate theme and mood. In: Proceedings of the 2nd International Symposium on Smart Graphics, pp. 1–8. ACM Press, New York, NY (2002)

14. Kyung, M.H., Kim, M.S., Hong, S.J.: A new approach to through-the-lens camera control. CVGIP: Graph. Model Image Process. 58(3), 262–285 (1996)

15. Lay, D.C.: Linear Algebra and Its Applications, 3rd edn. Addison-Wesley, Boston, MA (2002)

16. Lee, C.H., Varshney, A., Jacobs, D.W.: Mesh saliency. ACM Trans. Graph. 24(3), 659–666 (2005)

17. Lin, T.C., Shih, Z.C., Tsai, Y.T.: Cinematic camera control in 3D computer games. In: Short Communication Papers Proceedings of WSCG ’04, pp. 289–296. UNION Agency–Science Press, Plzen, Czech Republic (2004)

18. Shoemake, K.: Animating rotation with quaternion curves. In: Proceedings of ACM SIGGRAPH ’85, pp. 245–254. ACM Press, New York, NY (1985)

19. Sokolov, D., Plemenos, D.: Viewpoint quality and scene understanding. In: The 6th International Eurographics Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST ’05), pp. 67–73. Eurographics Association, Switzerland (2005)

20. Sokolov, D., Plemenos, D., Tamine, K.: Viewpoint quality and global scene exploration strategies. In: International Conference on Computer Graphics Theory and Applications (GRAPP ’06), pp. 184–191 (2006)

21. Tomlinson, B., Blumberg, B., Nain, D.: Expressive autonomous cinematography for interactive virtual environments. In: AGENTS ’00: Proceedings of the Fourth International Conference on Autonomous Agents, pp. 317–324. ACM Press, New York, NY (2000)

JI-YONG KWON received his B.S. degree in Computer Science from Yonsei University in 2005. He is currently a Ph.D. candidate in Computer Science at Yonsei University. His research interests include non-photorealistic rendering and animation, character animation, and geometric modeling.

IN-KWON LEE received his B.S. degree in Computer Science from Yonsei University in 1989 and earned his M.S. and Ph.D. degrees in Computer Science from POSTECH in 1992 and 1997, respectively. Currently, he teaches and conducts research in the areas of computer animation, geometric modeling, and music technology at Yonsei University.