
Shadow Mapping

Chun-Fa Chang

National Taiwan Normal University

Advanced Texture Mapping
- Using multiple textures
- Multi-pass textures:
  - 1st pass: render the scene as usual, then create textures from the output images.
  - 2nd pass: render the scene again using the created texture (a sketch follows below).
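A minimal sketch of this two-pass flow, using glCopyTexImage2D() to capture the first pass's output (drawScene, colorTex, and texSize are placeholder names, not identifiers from the lab code):

    /* Pass 1: render the scene as usual. */
    drawScene();

    /* Capture the frame buffer into a bound color texture. */
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, texSize, texSize, 0);

    /* Pass 2: render the scene again with the captured texture enabled. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    drawScene();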

Using Textures in a GLSL Shader
- sampler2D data type in GLSL
- Binding to the C/C++ program through glGetUniformLocation()
- See the myTexture variable in Lab 7, in both the fragment shader and the C code setShaders() (a sketch follows below).
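A minimal sketch of that binding, showing both sides (programObject and texId are placeholder names; myTexture follows the slide):

    /* C/C++ side: connect the sampler uniform to texture unit 0. */
    GLint loc = glGetUniformLocation(programObject, "myTexture");
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texId);
    glUniform1i(loc, 0);   /* the sampler reads from texture unit 0 */

    /* Fragment shader side: */
    uniform sampler2D myTexture;
    void main() {
        gl_FragColor = texture2D(myTexture, gl_TexCoord[0].st);
    }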

Shadow Map
- Uses two textures: color and depth.
- Relatively straightforward design using pixel (fragment) shaders on GPUs.

(Figure: the eye's view, the light's view, and the resulting depth/shadow map. Image source: Cass Everitt et al., “Hardware Shadow Mapping,” NVIDIA SDK White Paper.)

Basic Steps of Shadow Maps
1. Render the scene from the light’s point of view.
2. Use the light’s depth buffer as a texture (shadow map).
3. Projectively texture the shadow map onto the scene (use “TexGen” or a shader).
4. Use the “texture color” (comparison result) in fragment shading.

What’s in the Example Code?
A C++ class for storing matrix state:

    class OpenGL_Matrix_State {
        void Save_Matrix_State();      // save the current matrices
        void Restore_Matrix_State();   // restore the saved matrices
        void Set_Texture_Matrix();     // load the light's view into the texture matrix
    };

A proxy rectangle for debugging.

(1) Rendering from the Light’s View
- Set the camera to the light position.
- Set the viewport to the same size as the texture.
- To avoid a floating-point precision problem (a surface casting a shadow onto itself), the depth must be shifted:

    glPolygonOffset(..., ...);
    glEnable(GL_POLYGON_OFFSET_FILL);

- Shading can be turned off; we only care about the depth! (A sketch of this pass follows below.)
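A sketch of this pass (lightPos, shadowMapSize, and drawScene are placeholders; the polygon-offset values 1.1 and 4.0 are commonly used choices, not values given in the slides):

    /* Place the camera at the light, looking at the scene. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(lightPos[0], lightPos[1], lightPos[2],
              0.0, 0.0, 0.0,       /* scene center */
              0.0, 1.0, 0.0);      /* up vector */

    /* Match the viewport to the shadow-map texture size. */
    glViewport(0, 0, shadowMapSize, shadowMapSize);

    /* Shift depth to keep surfaces from shadowing themselves. */
    glPolygonOffset(1.1f, 4.0f);   /* tune per scene */
    glEnable(GL_POLYGON_OFFSET_FILL);

    /* Only depth matters here, so skip shading and color writes. */
    glShadeModel(GL_FLAT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

    drawScene();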

(2) Creation of the Shadow Map (Texture)
- Draw the objects (from the light’s view).
- To create a depth texture, use:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT,
                 shadowMapSize, shadowMapSize, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);

- Then use glCopyTexSubImage2D() to copy the frame buffer into the depth texture (see the sketch below).
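A minimal sketch of the copy step (shadowMapTex is a placeholder name); because the texture’s internal format is GL_DEPTH_COMPONENT, it is the depth buffer that gets copied:

    /* After the light-view pass, copy the depth buffer into the texture. */
    glBindTexture(GL_TEXTURE_2D, shadowMapTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,  /* target, mip level */
                        0, 0,              /* offset within the texture */
                        0, 0,              /* lower-left corner of the frame buffer */
                        shadowMapSize, shadowMapSize);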

(3) Generation of Texture Coordinates
When we render the scene again from the normal camera view:
- We store the light’s view in the texture matrix.
- The texture matrix is then passed to the GLSL shaders.
- gl_TextureMatrix[0] * vertex gives us the homogeneous coordinates in light space.
- Divide by w to obtain the texture coordinates.
- Watch out! Must shift from [-1, 1] to [0, 1]. (A sketch of the matrix setup follows below.)
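One common way to build that texture matrix (a sketch, assuming the light’s projection and model-view matrices were saved into lightProj and lightView as 16-element double arrays; the translate/scale pair is the bias that performs the [-1, 1] to [0, 1] shift):

    /* Texture matrix = bias * lightProjection * lightModelView. */
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5f, 0.5f, 0.5f);   /* bias: map [-1,1] ... */
    glScalef(0.5f, 0.5f, 0.5f);       /* ... to [0,1] (after division by w) */
    glMultMatrixd(lightProj);         /* light's projection matrix */
    glMultMatrixd(lightView);         /* light's model-view matrix */
    glMatrixMode(GL_MODELVIEW);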

Normalized Coordinates
- Independent of the screen resolution or window size.
- Clip coordinates: after the Model-View and Projection transformations.
- Normalized Device Coordinates (NDC): after division by w.

(4) Depth Comparison in the Fragment Shader
- Compare two depths:
  - the depth read from the shadow map
  - the depth obtained by transforming into light space
- In the shadow if ____?_ (your exercise).
- Set a darker color for shadowed surfaces. (One possible sketch follows below.)
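A minimal GLSL sketch of this comparison (shadowMap and lightCoord are assumed names, and the 0.005 bias is an arbitrary small constant; the if-condition is one possible answer to the exercise):

    uniform sampler2D shadowMap;
    varying vec4 lightCoord;   /* gl_TextureMatrix[0] * vertex, from the vertex shader */

    void main() {
        /* Divide by w to obtain texture coordinates in [0, 1]. */
        vec3 coord = lightCoord.xyz / lightCoord.w;

        /* Depth stored in the shadow map vs. this fragment's light-space depth. */
        float mapDepth  = texture2D(shadowMap, coord.xy).r;
        float fragDepth = coord.z;

        vec4 color = gl_Color;
        if (fragDepth > mapDepth + 0.005)   /* small bias against self-shadowing */
            color.rgb *= 0.4;               /* darken shadowed surfaces */
        gl_FragColor = color;
    }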

More GPU Programming and GPGPU

Chun-Fa Chang

National Taiwan Normal University

Calculator vs. Computer
- What is the difference between a calculator and a computer?
- Doesn’t a compute-r just “compute”?
- The Casio fx-3600P calculator can be programmed (38 steps allowed).

Turing Machine
- Can be adapted to simulate the logic of any computer that could possibly be constructed.
- The von Neumann architecture implements a universal Turing machine.
- Look them up on Wikipedia!

Simplified View
The data flow:

    3D polygons (+ colors, lights, normals, texture coordinates, etc.)
        → Transform (& Lighting) →
    2D polygons
        → Rasterization →
    2D pixels (i.e., output images)

Global Effects
- translucent surfaces
- shadows
- multiple reflections

(Figure: local vs. global illumination.)

How Does the GPU Draw This?

Quiz

Q1: A straightforward GPU pipeline gives us local illumination only. Why?

Q2: What typical effects are missing?

Hint: How is an object drawn? Does the pipeline consider its relationship with other objects?

Shadow, reflection, and refraction…

Wait, but I’ve seen shadows and reflections in games before…

(Figure: the same scene with and without shadows.)

Faked Global Illumination
- Shadow, reflection, BRDF, etc.
- In theory, real global illumination is not possible in the current graphics pipeline:
  - Conceptually, it is a loop over individual polygons.
  - There is no interaction between polygons.
- Can this be changed by multi-pass rendering?

Case Study: Shadow Map
- Uses two textures: color and depth.
- Relatively straightforward design using pixel (fragment) shaders on GPUs.

Adding “Memory” to GPU Computation
Modern GPUs allow:
- the use of multiple textures
- rendering algorithms that use multiple passes

(Diagram: the pipeline — Transform (& Lighting) → Rasterization — with Textures as the added memory.)