Realtime Painterly Rendering for Animation


    Daniel Sperl

    Hagenberg College of Information Technology, Austria

    Abstract

    This paper describes a realtime system for rendering animations in a painterly style. Impressionistic images are created using a large quantity of brush strokes, organized in particle systems to achieve frame-to-frame coherence. Reference pictures are used to compute the properties of each stroke.

    The basis of the presented technique is [Meier, B. J.: Painterly Rendering for Animation. In: Proceedings of SIGGRAPH 96, pp. 477–484, 1996], which describes a system for offline rendering. That system was modified to make it applicable in realtime environments through extensive use of modern 3D hardware. The possibility to send instructions to the graphics card and execute them either per vertex or per pixel (using vertex and pixel shaders) allows both the rendering of thousands of brush strokes per frame and the correct application of their properties. Programming the graphics hardware is done with the high-level shading language Cg. Fast access to reference pictures is achieved by using OpenGL extensions that allow direct rendering to textures.

    The results are very promising. Although not all aspects of the original painterly algorithm were included, the rendered images provide an idea of what is possible using this technique. With a few more optimization steps, it should be possible to utilize the presented technique in complex environments.

    1 Introduction

    The main application of computer graphics has always been the rendering of photorealistic imagery. A tremendous amount of research has been done in this area, and the majority of problems has been solved by now. In contrast to this, non-photorealistic rendering (NPR) is a relatively young field of research. Nevertheless, many algorithms have been developed in the last decade that create images which resemble art made by humans. The fast progress of graphics hardware affordable for consumers allows more and more of these algorithms to be processed in realtime (e.g. [2, 4, 5, 6, 8]).

    Figure 1: This image was rendered in realtime using a prototype of the described system.

    In paintings, a scene represents an artist's view of the world. All the information he wants to convey with his image has to be assembled by strokes of a brush. The attributes of each stroke can affect various characteristics of the image: the size of a stroke determines the maximal detail of an object, while direction and color describe the quality of the surface. By choosing these attributes, the artist can emphasize the parts of the scene he considers most important, or can create a certain mood and atmosphere.


    The most important elements of this procedure can be emulated in a computer, some of them even in realtime applications. However, some advanced rendering technology has to be used, e.g. vertex and pixel shaders. Even then, several tradeoffs between image quality and rendering speed have to be made.

    The system described here uses programmable graphics hardware. This is accomplished using the high-level shading language Cg (sufficient information on Cg can be found in [1]). To understand the principles of this language, a basic knowledge of vertex and pixel shaders is required. Basically, a vertex shader (also known as a vertex program) is a program that is executed for each vertex of an object, and can compute advanced lighting calculations or surface transformations (e.g. the movement of waves in water). A pixel shader (also known as a fragment program) can be used to determine the color of a fragment (before a pixel is written into the framebuffer, it is called a fragment), since it is executed for each pixel of a polygon's surface. A common application for a fragment program is per-pixel lighting (e.g. dot3 bump mapping).

    2 Related work

    The presented system was inspired mostly by [3] and [7]. In [3], Haeberli describes a system for creating painterly images using brush strokes, which obtain their attributes (position, color, size, etc.) from reference pictures containing photographic or rendered images. The use of reference pictures simplifies the creation of artistic images and allows fast and direct control of the process.

    The brush strokes of Haeberli's system are placed in screen space, which is sufficient for still images but leads to a "shower door" effect in animated sequences. In [7], this problem is addressed by using particle systems to control the placement of brush strokes. The particles are rendered using 2D textures, which are first sorted by their distance to the viewer and then blended with each other.

    3 The algorithm

    To convert the system of Barbara J. Meier [7] to realtime applications, the rendering of the brush strokes has to be accelerated. This is achieved mainly by using Cg, nVIDIA's high-level shading language [1]. Creation of the reference pictures is achieved by using several OpenGL extensions that allow direct rendering to textures [9, 10].

    The graphics pipeline of the described system is shown in figure 2. It is an extended version of the pipeline presented in [7]. A prototype of the described system was implemented using OpenGL. It can be downloaded from the following website: http://www.incognitek.com/redge/painterly.

    The rendering process consists of three steps:

    1. Before the actual rendering steps, some preprocessing is needed. The polygon meshes are converted into particle systems. Those attributes of the brush strokes that remain constant in each frame can be computed at the same time.

    2. The rendering itself consists of at least two passes. During the first pass, two reference pictures, which contain color and depth information, are rendered.

    3. In the second pass, the brush strokes are rendered. For this purpose, each particle is represented by a billboard. This is the most expensive part of the rendering process, since a large number of strokes needs to be processed.

    3.1 Preprocessing

    In the preprocessing step, the polygonal objects of the scene are used as a basis for creating particles. These particles will later be used to position the brush strokes which compose the image.

    The particles are placed within the polygons of the mesh. Depending on the area of the polygon and a user-defined value for density, a certain number of particles is placed within each polygon. In the described implementation, a simple algorithm for distributing the particles randomly on the surface is used.

    The number of particles created here represents the maximum that can be rendered later. However, only a fraction of them will be needed in an average frame. The actual number depends on the distance of the camera to each surface and on the size of the brush strokes.
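    The paper leaves the distribution algorithm open; one simple scheme that places a point uniformly on a triangle warps two uniform random numbers into barycentric weights, as sketched below (in Cg-style syntax for consistency with the later shader examples; in the prototype this step runs on the CPU during preprocessing, and the function name is an assumption):

```
// Hypothetical helper: maps two uniform random numbers rnd in [0,1]^2
// to a uniformly distributed point on the triangle (a, b, c).
float3 randomPointOnTriangle(float3 a, float3 b, float3 c, float2 rnd)
{
    float s = sqrt(rnd.x);        // sqrt compensates for the triangle's taper
    float u = 1.0 - s;            // barycentric weight of corner a
    float v = s * (1.0 - rnd.y);  // barycentric weight of corner b
    float w = s * rnd.y;          // barycentric weight of corner c
    return u * a + v * b + w * c;
}
```

    Polygons with more than three vertices can first be triangulated, splitting the particle budget among the triangles in proportion to their areas.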

    3.2 First pass

    The polygon meshes of the objects are not only needed for particle placement, but also for the rendering of reference pictures. Two pictures are created in the described system: one providing information about color, the other about depth (example images can be seen in figure 2).


    Figure 2: The pipeline of the painterly renderer.

    The pictures have to be created as efficiently as possible and should be stored within textures to allow fast access during the second pass.

    There are two possibilities to create these textures. One would be to render the geometry into the frame- and z-buffer (using common OpenGL functions) and copy the contents of these buffers into textures. However, copying the data is not very efficient. For this reason, another approach is used: there are OpenGL extensions which allow direct rendering to textures (including ARB_render_texture and WGL_ARB_pbuffer). These extensions allow the creation of additional rendering contexts, which are normally used for effects like realtime reflections. In the presented implementation, they proved to be useful.

    Unfortunately, some of the used extensions only work on graphics hardware from nVIDIA. It should be possible, however, to overcome this restriction.

    It would also be possible to determine the attributes of the brush strokes directly during rendering (e.g. in the vertex program). The advantage of using reference pictures, however, is their high flexibility. In theory, any arbitrary shading technique could be used in the first pass. It does not matter whether normal Gouraud shading or complex shadow and reflection algorithms are used: all the provided information will be assigned directly to the brush strokes.

    3.3 Second pass

    The second rendering pass is the most expensive one. Thousands of brush strokes are rendered simultaneously, which can be accomplished by programming the graphics card directly, and by several optimizations.

    The final image is composed of nothing but the billboards which represent the strokes. Nevertheless, the particles are still grouped within the polygon on which they were created. This turns out to be helpful in several steps of this rendering pass.

    3.3.1 Culling

    Before the particles are rendered, the typical optimization steps should be performed, such as backface culling and clipping. This can save a lot of processing time, but it has to be done manually, because some optimizations


    which are normally done by the graphics card do not work in the painterly renderer. The scene is made up exclusively of billboards, which are, by definition, front-facing. Backface culling, for example, can therefore not be performed by the graphics card. However, the polygon meshes from which the particles were created do have backfacing polygons, and these can be used to cull all particles placed within them.

    If all optimizations are executed with the polygons of the original geometry, and only the particles of the polygons assumed to be visible are drawn, the processing time can be reduced to a fraction of the time otherwise necessary.
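    As an illustration, the per-polygon backface test could be as simple as the following sketch (Cg-style syntax; in the prototype such a test would run on the CPU before the particles are submitted, and the names are assumptions):

```
// Returns true if the polygon faces the viewer; particles created on
// polygons that fail this test can be skipped entirely.
bool polygonFacesViewer(float3 faceNormal, float3 polygonCenter,
                        float3 eyePosition)
{
    return dot(faceNormal, eyePosition - polygonCenter) > 0.0;
}
```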

    3.3.2 Variation of particle density

    In a real painting, small objects are drawn with few brush strokes. The maximum level of detail is predetermined by the size of the brush. The painterly renderer should resemble this behavior.

    When using polygon meshes, automatic modification of the level of detail is a complex process. With the painterly renderer, on the contrary, it is comparatively easy: to decrease the detail, it is only necessary to reduce the number of rendered strokes.

    The required number of strokes per polygon depends on three things: the size of the brush stroke, the area of the polygon in screen space, and the desired stroke density. If these values are known, the optimal number of strokes per polygon can be calculated and used when the strokes are drawn.

    While the size of each brush stroke and the desired density are defined through user input, the polygon's area in the output image has to be approximated. This can be accomplished using the actual size of the polygon (in world space), its distance to the viewer, and the angle between its normal vector and the view vector.

    The density is calculated in each frame and for each polygon. Therefore, when the camera moves through the scene, the number of particles smoothly changes to the appropriate amount.
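    The paper does not give the formula explicitly, but the described estimate could be reconstructed roughly as follows (a Cg-style sketch; all names and the exact foreshortening term are my own assumptions):

```
// Approximate the number of strokes a polygon needs this frame.
// worldArea:  polygon area in world space
// distance:   distance of the polygon to the viewer
// cosAngle:   cosine of the angle between normal and view vector
// strokeArea: screen-space area covered by one brush stroke
// density:    user-defined stroke density
float strokesPerPolygon(float worldArea, float distance, float cosAngle,
                        float strokeArea, float density)
{
    // foreshortening: projected area shrinks with the squared distance
    // and with the tilt of the surface away from the viewer
    float screenArea = worldArea * abs(cosAngle) / (distance * distance);
    return density * screenArea / strokeArea;
}
```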

    3.3.3 Creation of billboards

    Each brush stroke is represented by a textured billboard. The coordinates of the billboard's vertices could be created on the CPU and sent to the graphics card, but since vertex programs are needed anyway, it is faster to create the billboard directly on the GPU.

    Figure 3: When there are too few particles available for a certain perspective (a), the strokes are scaled proportionally (b).

    In order to accomplish this, the coordinate of the center of the billboard (i.e. the coordinate of the particle) is sent to the vertex program four times, accompanied by the texture coordinates needed for the brush texture. These two coordinates can be used in combination to create each corner of the billboard. Since vertex programs need the coordinate of the vertex in clip space, the two sets of coordinates just have to be added together (in addition to adding an offset and multiplying with the desired size).

    Each billboard should always have the same size in screen space, independent of the distance from which it is viewed. In a perspective view, an object that is further away from the viewer is normally displayed smaller than a closer one. Internally, this is accomplished by the perspective division, in which the x- and y-coordinates of a vertex (in clip space) are divided by their distance from the viewer. To compensate for this process (which cannot be avoided), the affected coordinates are simply multiplied with the same value in the vertex program, thus reversing the division.

    However, the process described above can lead to holes in the surfaces of objects if the camera is very close to a surface, because not enough particles may have been created during the preprocessing step. This can be compensated for by re-enabling the perspective division at exactly the distance at which too few particles are available (figure 3).
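    To make the construction concrete, the following vertex program sketches one way to build a billboard corner and compensate the perspective division. It is a minimal reconstruction in Cg, not the paper's actual code; the structure and all names are assumptions, and scaling the offset by the w-coordinate is used here as the equivalent of reversing the division:

```
// Sketch of a billboard vertex program (reconstruction, not the
// original). The particle center arrives four times per stroke;
// 'corner' selects which corner of the quad to build.
struct V2F {
    float4 position : POSITION;
    float2 brushUV  : TEXCOORD0;  // coordinates into the brush texture
    float3 refUV    : TEXCOORD1;  // reference lookup coords + depth
};

V2F strokeVertex(float4 center : POSITION,
                 float2 corner : TEXCOORD0,  // (0,0), (0,1), (1,0) or (1,1)
                 uniform float4x4 modelViewProj,
                 uniform float    strokeSize)
{
    V2F output;
    float4 clip = mul(modelViewProj, center);

    // The reference lookup uses the particle center (before the corner
    // offset), so the whole stroke picks up a single reference texel.
    // Scale and bias from NDC [-1,1] to [0,1]: xy addresses the
    // reference textures, z is later compared against the stored depth.
    output.refUV = clip.xyz / clip.w * 0.5 + 0.5;

    // Offset the corner in clip space. Scaling the offset by clip.w
    // cancels the later perspective division, so the stroke keeps a
    // constant size in screen space regardless of its distance.
    clip.xy += (corner - 0.5) * strokeSize * clip.w;

    output.position = clip;
    output.brushUV  = corner;
    return output;
}
```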


    Figure 4: A reference texture is used to compute the color of each brush stroke.

    3.3.4 Accessing reference textures

    The color of each brush stroke is computed by accessing the color reference picture. For this purpose, the screen-space position of the particle is needed, for it describes the coordinates that are used to access the reference pictures. The screen coordinates are calculated in the vertex program, and subsequently scaled and biased to lie in the range needed for texture access. In the fragment program, these coordinates are used to look up the color of the stroke. Figure 4 illustrates the process.

    The final color of the billboard is then computed by multiplying the brush texture with the reference color. If a slight color variation was stored when creating the particles, it can also be added. This can be used to create more natural images, since the colors of a real painting are rarely mixed perfectly.
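    A matching fragment program could perform the lookup and modulation like this (again a hedged sketch; the sampler and variable names are invented, and the optional per-particle color variation is omitted):

```
// Fragment-program sketch: fetch the reference color at the particle's
// screen position and modulate it with the brush texture.
float4 strokeFragment(float2 brushUV : TEXCOORD0,
                      float3 refUV   : TEXCOORD1,
                      uniform sampler2D brushTexture,
                      uniform sampler2D colorReference) : COLOR
{
    float4 brush = tex2D(brushTexture, brushUV);
    float4 paint = tex2D(colorReference, refUV.xy);
    // brush alpha shapes the stroke; the reference color tints it
    return float4(brush.rgb * paint.rgb, brush.a);
}
```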

    3.3.5 Depth test

    Brush textures generally use an alpha channel. Therefore, the particles have to be rendered into the framebuffer using blending. The problem is that, in order to get a correct image, blending requires the brush strokes to be drawn depth-sorted, beginning with the one that is furthest away from the viewpoint. Since several thousand strokes have to be drawn, this is impossible to do in realtime, so another approach had to be found.

    The solution is to draw only those brush strokes which are situated on polygons that are visible in the present frame (even though the polygons themselves are not rendered). This can be accomplished using the depth reference picture saved in the first rendering pass (its content is the z-buffer of the first pass). The depth value found in the reference texture is compared to the depth value of the present billboard, and the billboard is drawn only if its depth is smaller than or equal to the one found in the reference texture (as shown in figure 5). Since access to this texture is only granted in the fragment program, this test has to be repeated for every fragment.

    Figure 5: The manual depth test compares the z-values from the first pass (z_old) with the z-values of each billboard (z_new).

    Figure 6: Two sculptures, rendered in realtime.


    The results of this manual depth test are not as accurate as those of depth sorting, but if the density and size of the strokes are well balanced, this can hardly be noticed in the final image.
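    Folded into the fragment program, the manual depth test could look as follows (a sketch under the assumptions that the first-pass depth is stored in the red channel of a texture and that a small bias masks precision differences; all names are my own):

```
// Fragment-program sketch with the manual depth test: fragments whose
// depth lies behind the first-pass z-value are discarded.
float4 strokeFragmentDepthTested(float2 brushUV : TEXCOORD0,
                                 float3 refUV   : TEXCOORD1,
                                 uniform sampler2D brushTexture,
                                 uniform sampler2D colorReference,
                                 uniform sampler2D depthReference,
                                 uniform float     depthBias) : COLOR
{
    float zOld = tex2D(depthReference, refUV.xy).r;  // first-pass depth
    if (refUV.z > zOld + depthBias)
        discard;  // the stroke sits on a polygon hidden in this frame

    float4 brush = tex2D(brushTexture, brushUV);
    float4 paint = tex2D(colorReference, refUV.xy);
    return float4(brush.rgb * paint.rgb, brush.a);
}
```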

    3.4 Results

    Figure 1 shows a still life which was rendered in realtime. Approximately 50,000 brush strokes assemble this image, while in preprocessing, nearly 500,000 were created. This means that the camera can be placed very near to the objects before the brush strokes start to scale. Figures 6 and 7 display some other painterly rendered objects.

    On an average computer, up to about 30,000 particles are rendered very smoothly (at about 30 frames per second). The framerate decreases with the number of particles in the scene. Using a Pentium 4 with 2.4 GHz and a GeForce4 Ti 4600, a scene with about 100,000 particles was still rendered at more than 8 fps.


    Figure 7: Example of a rendered image showing an apple.


    The number of required particles can be reduced considerably if the color reference picture (rendered in the first pass) is drawn as a background behind the brush strokes. In that case, the particle density can be lower, and the background shines through without being noticed.

    However, the algorithm allows several more optimizations. In the prototype, for example, no culling was implemented; adding it would speed up the process immensely, especially when an object of the scene is observed very closely.

    Even though the results are very promising, there are still some improvements to be made. First of all, the current prototype only works on nVIDIA graphics cards (starting with the GeForce3, apart from the cheaper MX variants) because of some of the extensions chosen. However, the rendering process should also work on hardware from ATI if some minor changes are made.

    Another problem can be observed when the size of the brush strokes is relatively big. Since the brush strokes are not depth-sorted, it can happen that small details of an object are obscured by adjacent strokes. This can be avoided, however, if the brush size and particle density are chosen carefully.

    The biggest challenge is to reduce the amount of memory used by the algorithm. If the painterly renderer, as described above, is used for very complex scenes (e.g. the huge environments of modern computer games), the number of particles that are created in the preprocessing step is far too large. A solution for this problem would be to create only those particles during preprocessing which are needed for the first frame of the animation. All other particles could be created in realtime during the rendering (while those no longer needed could be deleted simultaneously). If it is assured that the camera only moves in small steps, i.e. direct jumps from one point to another are not possible, the number of new particles per frame would be small enough to allow on-the-fly creation.

    References

    [1] Cg Toolkit User's Manual, Release 1.1. nVIDIA Corporation, first edition, March 2003.

    [2] B. Gooch, P.-P. J. Sloan, A. Gooch, P. Shirley, and R. Riesenfeld. Interactive technical illustration. In 1999 ACM Symposium on Interactive 3D Graphics, pages 31–38, April 1999.

    [3] P. E. Haeberli. Paint by numbers: Abstract image representations. In Proceedings of SIGGRAPH 90, pages 207–214, 1990.

    [4] M. Kaplan, B. Gooch, and E. Cohen. Interactive artistic rendering. In Proceedings of the First International Symposium on Non-Photorealistic Animation and Rendering, pages 67–74. ACM Press, 2000.

    [5] J. Lander. Shades of Disney: Opaquing a 3D world. Game Developer Magazine, March 2000.

    [6] L. Markosian, M. A. Kowalski, S. J. Trychin, L. D. Bourdev, D. Goldstein, and J. F. Hughes. Real-time nonphotorealistic rendering. In Proceedings of SIGGRAPH 97, pages 415–420, 1997.

    [7] B. J. Meier. Painterly rendering for animation. In Proceedings of SIGGRAPH 96, pages 477–484, 1996.

    [8] E. Praun, H. Hoppe, M. Webb, and A. Finkelstein. Real-time hatching. In Proceedings of SIGGRAPH 2001, page 581, 2001.

    [9] C. Wynn. Using p-buffers for off-screen rendering in OpenGL. Technical report, nVIDIA Corporation, August 2001.

    [10] C. Wynn. OpenGL render-to-texture. Technical report, nVIDIA Corporation, March 2002.
