Using Texture Synthesis for Non-Photorealistic Shading from Paint Samples
Christopher D. Kulla, James D. Tucek, Reynold J. Bailey, Cindy M. Grimm

Basic Idea

Input:
• A shaded 3D computer-generated model.

• A user-provided paint sample specifying the change from dark to light, scanned in from traditional art media or created with a 2D paint program.

Output:
• The model rendered in a style similar to that of the paint sample.

Texture synthesis is used to generate enough “paint” to cover the model (based on the Image Quilting technique of Efros and Freeman, 2001).

Techniques:
• Image Based Texture Synthesis.

• View Aligned 3D Texture Projection.

• View Dependent Interpolation.

Previous Work

Color-based techniques:
• Cartoon Shading, Lake et al. 2000.
• Technical Illustration, Gooch et al. 1998.
• The Lit Sphere, Gooch et al. 2001.

Texture-based techniques:
• Hatching, Praun et al. 2001.
• Stippling, Deussen, 2000.
• Half-toning, Freudenberg, 2002.
• Charcoal, Majumder, 2002.

Color / texture combined techniques:
• Volume texturing, Webb et al. 2002.

Stroke-based techniques:
• WYSIWYG NPR, Kalnins et al. 2002.
• Painterly rendering, Meier, 1996.

Paint Processing (to extract information for rendering)


Paint samples have two distinct properties:
• Color transition.

• Brush texture.

[Figure: the original sample, its unsorted (streaky) trajectory, the sorted (smooth) trajectory, and the resulting brush texture.]

Processing steps (a minimal sketch follows this list):
• Average every pixel column of the original paint sample. This gives an unsorted trajectory.

• Sort this trajectory to produce a smooth trajectory.

• Subtract the smooth, sorted trajectory from the original sample. This gives the brush texture.
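A minimal NumPy sketch of this decomposition, assuming the sample runs dark-to-light along its horizontal axis and that sorting is done by luminance (the paper does not specify the sort key):

```python
import numpy as np

def extract_trajectory_and_texture(sample):
    """Split a paint sample (H x W x 3, floats in [0, 1]) into a smooth
    color trajectory and a brush texture."""
    # Average every pixel column: one color per column, the unsorted
    # ("streaky") trajectory.
    unsorted_traj = sample.mean(axis=0)                   # (W, 3)

    # Sort the trajectory to produce a smooth dark-to-light ramp.
    # Sorting by luminance is an assumption of this sketch.
    luminance = unsorted_traj @ np.array([0.299, 0.587, 0.114])
    smooth_traj = unsorted_traj[np.argsort(luminance)]    # (W, 3)

    # Subtract the smooth trajectory from the sample; the (signed)
    # residual is the brush texture.
    brush_texture = sample - smooth_traj[np.newaxis, :, :]
    return smooth_traj, brush_texture
```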

[Figure: a user-created paint sample, the original distribution, and the extracted trajectory.]

Create user-defined “paint samples”:
• Add an arbitrary color trajectory to the extracted brush texture.

• Numerous paint samples can be created from the original, increasing artistic freedom and control. A sketch of this recombination follows.
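Continuing the sketch above, a new sample is simply the extracted brush texture re-colored with a user-drawn trajectory (the clamp to [0, 1] is an assumption about the color range):

```python
import numpy as np

def make_paint_sample(brush_texture, new_trajectory):
    """Recombine an extracted brush texture (H x W x 3) with an arbitrary
    user-drawn color trajectory (W x 3, dark to light) into a new sample."""
    return np.clip(brush_texture + new_trajectory[np.newaxis, :, :], 0.0, 1.0)
```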


Image Based Texture Synthesis

Paint is synthesized over the region covered by the model in image space. This region is given by an ID buffer.

The shaded model is used as a guide.

Blocks are placed so that they overlap. A “minimum error cut” is performed between blocks to minimize visual discontinuity, as sketched below.
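A minimal sketch of the minimum-error-cut step from Image Quilting (Efros and Freeman, 2001): a dynamic program finds the cheapest vertical seam through the overlap between an already-placed block and a new candidate.

```python
import numpy as np

def minimum_error_cut(overlap_existing, overlap_candidate):
    """Find a vertical minimum-error cut through an H x W x 3 overlap
    region. Returns one column index per row: pixels left of the cut are
    kept from the existing output, pixels right of it come from the
    candidate block."""
    # Per-pixel squared color difference across the overlap.
    err = ((overlap_existing - overlap_candidate) ** 2).sum(axis=2)
    H, W = err.shape

    # Dynamic programming: cheapest cut reaching each pixel from the top.
    cost = err.copy()
    for y in range(1, H):
        for x in range(W):
            lo, hi = max(x - 1, 0), min(x + 2, W)
            cost[y, x] += cost[y - 1, lo:hi].min()

    # Backtrack from the cheapest pixel in the bottom row.
    cut = np.empty(H, dtype=int)
    cut[-1] = int(cost[-1].argmin())
    for y in range(H - 2, -1, -1):
        x = cut[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, W)
        cut[y] = lo + int(cost[y, lo:hi].argmin())
    return cut
```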

The color component and the texture component are generated separately, then added together to produce the final image.

[Figure: shaded model and ID buffer; color component + texture component = final image.]

Advantages:
• Individual frames have high quality.

Disadvantages:
• Slow rendering time: 20 seconds to 1 minute per frame, due to the texture synthesis step.

• Animations suffer from the “shower door effect”, which results from naively re-synthesizing each frame from scratch. A constraint can be added that requires each block to match the previous frame as much as possible (sketched below), but this increases rendering time and does not completely eliminate the effect.
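A hedged sketch of such a coherence constraint: the candidate block's quilting error is augmented with a penalty for differing from the same screen region in the previous frame. The penalty weight here is illustrative, not a value from the paper.

```python
import numpy as np

def block_error(candidate, overlap_existing, prev_frame_block, coherence=0.5):
    """Score a candidate block: the usual overlap error plus a temporal
    term that rewards matching the previous frame's pixels."""
    overlap_err = np.sum((candidate[:, : overlap_existing.shape[1]]
                          - overlap_existing) ** 2)
    temporal_err = np.sum((candidate - prev_frame_block) ** 2)
    return overlap_err + coherence * temporal_err
```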

View Aligned 3D Texture Projection

Recent advances in graphics hardware allow for the use of volume (3D) textures.
• A volume texture is simply a stack of 2D textures.

Texture synthesis is done as a preprocessing step:
• The input sample is divided into 8 regions of roughly constant shade.

• Image Quilting is used to synthesize larger versions (512 × 512) of each region.

Each of the synthesized images is then processed to ensure that it is tileable.
• This ensures that there are no visible seams when the texture repeats over the image. One common way to do this is sketched below.
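The paper does not spell out its tiling method; a common trick, sketched here, is to cross-fade the image with a half-offset copy of itself, so that the borders of the result come from adjacent interior pixels and therefore match up when tiled:

```python
import numpy as np

def make_tileable(img):
    """Make an H x W x 3 image tileable by blending it with a half-offset
    copy of itself. Near the borders the result equals the offset copy,
    whose opposite edges are adjacent pixels of the original image."""
    H, W, _ = img.shape
    rolled = np.roll(img, (H // 2, W // 2), axis=(0, 1))
    # Weight is 1 at the image center and falls to 0 at the borders.
    wy = 1.0 - np.abs(np.linspace(-1.0, 1.0, H))
    wx = 1.0 - np.abs(np.linspace(-1.0, 1.0, W))
    w = np.minimum.outer(wy, wx)[:, :, np.newaxis]
    return w * img + (1.0 - w) * rolled
```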

A 3D texture is created by stacking the tileable images in order of increasing shade value.
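In code, the stacking step might look like this; using each tile's mean intensity as its shade value is an assumption of the sketch:

```python
import numpy as np

def build_volume(tiles):
    """Stack the 8 tileable 512 x 512 x 3 images into a 3D texture,
    ordered from darkest to lightest."""
    order = np.argsort([t.mean() for t in tiles])
    return np.stack([tiles[i] for i in order])    # (8, 512, 512, 3)
```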

Horizontal and vertical texture coordinates are generated by mapping horizontal and vertical screen coordinates respectively to the interval [0, 511].

[Figure: input sample, the 8 synthesized tileable regions (512 × 512), the resulting 3D texture, and an example rendering.]

Hardware automatically performs blending between the levels of the 3D texture.

The third texture coordinate (depth) is generated by mapping the shading values of the model to the interval [0, 7].
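A CPU-side sketch of the whole lookup, approximating what the hardware's filtering does between the 8 layers (a hypothetical helper, continuing the `build_volume` sketch above):

```python
import numpy as np

def shade_lookup(volume, shade, x, y):
    """Look up the paint color for one pixel. `volume` is the
    (8, 512, 512, 3) stack, `shade` is the model's shading value in
    [0, 1], and (x, y) are screen coordinates."""
    u, v = x % 512, y % 512          # wrap screen coords into the tile
    d = shade * 7.0                  # map shade to depth in [0, 7]
    lo = int(np.floor(d))
    hi = min(lo + 1, 7)
    t = d - lo                       # blend factor between adjacent layers
    return (1.0 - t) * volume[lo, v, u] + t * volume[hi, v, u]
```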

Advantages:
• Almost matches the quality of Image Based Texture Synthesis.

• Runs in real-time.

• Fair degree of frame-to-frame coherence.

Disadvantages:
• Lengthy preprocessing time: synthesizing eight 512 × 512 textures and making each tileable may take as long as 15 minutes.

View Dependent Interpolation

Specific textures are assigned to the “important” views of the model.
• The user specifies which n views are important.

Every face in the model must appear in at least one of these views; this ensures that there are no gaps (unpainted regions) in the resulting image. Typically 12 to 15 views are sufficient. A greedy way to check and build such a view set is sketched below.
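The paper leaves view selection to the user; purely as an illustration, a greedy cover over per-view face visibility (e.g., taken from ID-buffer renders, a hypothetical input here) could assemble a valid set:

```python
def choose_views(candidate_views, visible_faces, all_faces):
    """Greedily pick views until every face is visible in at least one.
    `visible_faces[v]` is the set of face ids visible from view v."""
    uncovered, chosen = set(all_faces), []
    while uncovered:
        best = max(candidate_views,
                   key=lambda v: len(visible_faces[v] & uncovered))
        if not visible_faces[best] & uncovered:
            raise ValueError("some faces are hidden from every candidate view")
        chosen.append(best)
        uncovered -= visible_faces[best]
    return chosen
```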

Image Quilting is used to generate 2D textures for each of these n views.
• Assume v is the first view synthesized.

• Some subset of the faces in v may also be present in v+1. The texture associated with these faces is copied over to v+1 and used as a guide for synthesizing the remaining faces of v+1.

This improves frame-to-frame coherence.

Texture distortion may arise because a face in v will not necessarily have the same shape or size in v+1, due to the curvature of the model.

To render a particular view, weights are assigned to each of the n 2D textures based on how much the viewing direction associated with that texture differs from the current viewing direction.
• The highest weight is assigned to the texture that most closely matches the current viewing direction.

A 3D texture is created by stacking the n 2D textures.

These weights are used to blend the textures together to create the final image.
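A minimal sketch of this weighting and blending; the clamped-dot-product falloff (and its exponent) is an assumption, since the source only states that closer views receive higher weight:

```python
import numpy as np

def blend_views(view_dirs, textures, current_dir, sharpness=8.0):
    """Blend n per-view textures. `view_dirs` is (n, 3) with unit viewing
    directions, `textures` is (n, H, W, 3), `current_dir` is the current
    viewing direction."""
    current_dir = current_dir / np.linalg.norm(current_dir)
    dots = np.clip(view_dirs @ current_dir, 0.0, 1.0)   # alignment per view
    weights = dots ** sharpness                         # sharper falloff
    weights /= weights.sum()
    return np.tensordot(weights, textures, axes=1)      # (H, W, 3)
```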

Advantages:
• Runs in real-time.

• Good frame-to-frame coherence.

Disadvantages:
• Lengthy preprocessing time: 20 seconds to 1 minute for each view, so the total depends on how many views the user specifies as being “important”.

• There is some loss of texture quality due to the distortion necessary to fit the curvature of the model.