Post on 06-Oct-2020
Computer Graphics (CS 563) Lecture 4: Advanced Computer Graphics
Image‐Based Effects: Part 2
Prof Emmanuel Agu
Computer Science Dept., Worcester Polytechnic Institute (WPI)
Image Processing
Graphics is concerned with creating artificial scenes from geometry and shading descriptions
Image processing: input is an image; output is a modified version of that image
Image processing operations include altering images, removing noise, and superimposing images
Image Processing Example: Sobel Filter
(Figure: original image and its Sobel-filtered result)
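As a concrete illustration, the Sobel filter can be sketched on the CPU with NumPy. This is a minimal stand-in for the GPU version; the function name and the toy 5x5 image are illustrative, not from the slides:

```python
import numpy as np

def sobel(image):
    """Apply the Sobel edge-detection filter to a 2D grayscale image.

    Pads the border by edge replication so the output matches the input shape.
    """
    # Horizontal and vertical Sobel kernels
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 3, x:x + 3]
            # Gradient magnitude from the horizontal and vertical responses
            out[y, x] = np.hypot(np.sum(window * gx), np.sum(window * gy))
    return out

# A vertical edge produces a strong response along the edge and none in flat areas
img = np.zeros((5, 5))
img[:, 3:] = 1.0
edges = sobel(img)
```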
Image Processing
Image processing applied to the output of graphics rendering is called post-processing
To post-process on the GPU, the rendered output is usually written to an offscreen buffer (e.g. color image, z-depth buffer, etc.)
The image in the offscreen buffer is treated as a texture and mapped to a screen-filling quadrilateral
A pixel shader is then invoked on each element of the texture
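The quad-plus-pixel-shader pattern can be mimicked on the CPU by mapping a per-texel function over the offscreen buffer. A minimal sketch, with a negate shader standing in for an arbitrary pixel shader (all names here are illustrative):

```python
import numpy as np

def apply_pixel_shader(buffer, shader):
    """Apply a per-texel shader function to every element of an offscreen buffer."""
    out = np.empty_like(buffer)
    for y in range(buffer.shape[0]):
        for x in range(buffer.shape[1]):
            out[y, x] = shader(buffer[y, x])
    return out

# Image negative: invert each 8-bit intensity
negate = lambda texel: 255 - texel
frame = np.array([[0, 128], [200, 255]], dtype=np.uint8)
negative = apply_pixel_shader(frame, negate)
```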
Image Negative
Image Distortion
Image Sharpening
Embossing
Toon Rendering
Toon Rendering for Non‐Photorealistic Effects
Blurring
For some operations, a texture element may be combined with neighboring texture elements (e.g. blurring)
(Figure: scene with and without motion blur)
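A neighborhood operation like blurring can be sketched as a 3x3 box filter, where each texel is averaged with its eight neighbors. This is a simplified stand-in for the blurs shown on the slides:

```python
import numpy as np

def box_blur(image):
    """Blur by averaging each texel with its 8 neighbors (3x3 box filter)."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 3, x:x + 3].mean()
    return out
```

A constant image is unchanged by the blur, while sharp features are smoothed.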
Texture Animation using Image Processing
Use the GPU to modify textures from frame to frame
Animations such as fluid flow can be done this way
Example: simulating rain by Tatarchuk et al
Heat Shimmer
Color Correction
Color correction uses a function to convert the colors in an image to some other colors
Why color correct?
Mimic the appearance of a type of film
Portray a particular mood
Convert from one color space to another
Example: conversion from RGB to CIE's XYZ color space
X = 0.412453 R + 0.357580 G + 0.180423 B
Y = 0.212671 R + 0.715160 G + 0.072169 B
Z = 0.019334 R + 0.119193 G + 0.950227 B
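The RGB-to-XYZ conversion is just a matrix multiply per pixel; a small sketch using the conversion matrix from the slide (helper name is illustrative):

```python
import numpy as np

# Linear RGB to CIE XYZ conversion matrix; rows correspond to X, Y, Z
RGB_TO_XYZ = np.array([
    [0.412453, 0.357580, 0.180423],
    [0.212671, 0.715160, 0.072169],
    [0.019334, 0.119193, 0.950227],
])

def rgb_to_xyz(rgb):
    """Convert a linear RGB triple to CIE XYZ via matrix multiplication."""
    return RGB_TO_XYZ @ np.asarray(rgb, dtype=float)

# White (1, 1, 1) maps to the row sums of the matrix; Y (luminance) is exactly 1
white = rgb_to_xyz([1.0, 1.0, 1.0])
```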
High Dynamic Range
The sun's brightness is about 60,000 lumens, while dark areas of the earth have a brightness of 0 lumens
The world around us therefore spans a range of roughly 0 to 60,000 lumens (High Dynamic Range)
However, a monitor displays color values in the range 0 to 255 (Low Dynamic Range)
New file formats have been created for HDR images with wider ranges (e.g. the OpenEXR file format)
High Dynamic Range
Some scenes contain both very bright and very dark areas
Using a uniform scaling factor to map actual intensity to displayed pixel intensity means either some areas are underexposed or some areas are overexposed
(Figure: underexposed vs. overexposed rendering of the same scene)
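The exposure problem can be demonstrated with a single global scale factor; no one factor exposes both ends of the range. The luminance values below are illustrative:

```python
import numpy as np

def uniform_exposure(lum, scale):
    """Map world luminance to 8-bit display values with one global scale factor."""
    return np.clip(lum * scale, 0, 255).astype(np.uint8)

scene = np.array([0.5, 10.0, 60000.0])           # dark interior, midtone, sun
bright_ok = uniform_exposure(scene, 255 / 60000)  # sun fits, but dark pixels crush to 0
dark_ok = uniform_exposure(scene, 25.0)           # dark detail survives, but sun clips to 255
```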
Tone Mapping
The process of scaling intensities in real-world images (e.g. HDR images) to fit in the displayable range
Tries to capture the feeling of the real scene, which is non-trivial
Example: when coming out of a dark tunnel, lights should seem bright
Types of Tone Mapping Operators
Global: use the same scaling factor for all pixels
Local: use different scaling factors for different parts of the image
Time-dependent: the scaling factor changes over time
Time-independent: the scaling factor does NOT change over time
Real-time rendering usually does NOT implement local operators due to their complexity
Tone Mapping Operators
Simple (Global) Tone Mapping Methods
Tone Mapping
If the range of input values is small, compute the average luminance, then scale so that the average falls in the displayable range
A simple average may let a few large values dominate
Reinhard suggested using logarithms instead when summing pixel values
L_avg = exp( (1/N) Σ_{x,y} log(δ + L_w(x, y)) )

where L_avg is the log-average luminance, L_w(x, y) is the luminance at pixel (x, y), N is the number of pixels, and the small constant δ avoids taking the log of 0.
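Computing the log-average luminance is a one-liner with NumPy; the helper name and the default delta are illustrative:

```python
import numpy as np

def log_average_luminance(lum, delta=1e-6):
    """Reinhard's log-average luminance: exp of the mean log luminance.

    delta is a small offset that avoids taking log(0) on black pixels.
    """
    return float(np.exp(np.mean(np.log(delta + lum))))
```

For a uniform image the log-average simply recovers the pixel value.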
Tone Mapping
Once the log-average luminance L_avg is computed, we can define the tone mapping operator:

L(x, y) = (a / L_avg) · L_w(x, y)

where L(x, y) is the resulting luminance and the parameter a is the key of the scene (a = 0.18 is normal). A high key minimizes contrast and shadows, e.g. a = 0.72. A low key maximizes contrast between light and dark, e.g. a = 0.045.
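This global operator can be sketched directly from the formula (names are illustrative):

```python
import numpy as np

def reinhard_scale(lum_world, a=0.18, delta=1e-6):
    """Scale world luminances by the key a over the log-average luminance."""
    log_avg = np.exp(np.mean(np.log(delta + lum_world)))
    return (a / log_avg) * lum_world

# A uniform scene maps every pixel to the key value itself
scaled = reinhard_scale(np.full((2, 2), 100.0))
```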
Tone Mapping: Effects of a
Lens Flare and Bloom
Caused by the lens of the eye/camera when directed at a light
Halo: refraction of light by the lens
Ciliary corona: density fluctuations of the lens
Bloom: scattering in the lens, a glow around the light
(Figure: halo, bloom, ciliary corona, top to bottom)
Lens Flare and Bloom
Use a set of textures for the glare effects
Each texture is billboarded; an alpha map controls how much to blend
Textures can be given colors for the corona
Overlap all of them, and animate to create sparkle
Depth of Field
In photographs, a range of pixels is in focus; pixels outside this range are out of focus
This effect is known as depth of field
Depth of Field using the Accumulation Buffer
Jitter the view position and add weighted samples to the accumulation buffer
After multiple rendering passes, display the picture
Downside: multiple rendering passes are expensive
Depth of Field using Scattering
Scatter the shading value of each location on a surface to neighboring pixels
Sprites are used to represent circles of influence
Each pixel value is the averaged sum of all overlapping circles
Motion Blur
Antialiasing is spatial blurring; in cameras, motion blur is caused by exposing film to moving objects
Motion blur is the blurring of samples taken over time; it makes fast-moving scenes appear less jerky
30 fps with motion blur looks better than 60 fps without motion blur
Motion Blur
The accumulation buffer can be used to create blur
The basic idea is to average a series of images over time: move the object through the set of positions it occupies during a frame, then blend the resulting images together
Motion Blur
Can blur a moving average of frames, e.g. blur 8 images: when you render frame 9, subtract frame 1, etc.
Velocity buffer: blur in screen space using the velocity of objects
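The accumulation-buffer idea can be sketched by simply averaging a list of rendered frames; this is a CPU stand-in for the accumulation buffer, with illustrative names:

```python
import numpy as np

def accumulate_motion_blur(frames):
    """Average a sequence of rendered frames; longer sequences give more blur."""
    acc = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        acc += frame
    return acc / len(frames)

# Blending a dark frame and a bright frame yields the in-between image
blurred = accumulate_motion_blur([np.zeros((2, 2)), np.full((2, 2), 2.0)])
```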
Fog
Fog was part of the OpenGL fixed-function pipeline
Using shaders, fog is applied to the scene just before display, and shaders can generate more elaborate fog
Fog is an atmospheric effect: it adds a little realism and helps in determining distances
Fog example
Often just a matter of:
Choosing the fog color
Choosing the fog model
Turning it on
Rendering Fog
Let c_f be the color of the fog and c_s the color of the surface:

c_p = f · c_s + (1 − f) · c_f,   f ∈ [0, 1]
How to compute f? Three ways: linear, exponential, and exponential-squared
Linear:

f = (z_end − z_p) / (z_end − z_start)
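A small sketch of the linear fog factor and the blend equation; the clamp to [0, 1] is added for robustness and the names are illustrative:

```python
def linear_fog_factor(z_p, z_start, z_end):
    """Linear fog: f = (z_end - z_p) / (z_end - z_start), clamped to [0, 1]."""
    f = (z_end - z_p) / (z_end - z_start)
    return max(0.0, min(1.0, f))

def apply_fog(c_surface, c_fog, f):
    """Blend surface and fog colors: c_p = f*c_s + (1 - f)*c_f."""
    return tuple(f * s + (1.0 - f) * g for s, g in zip(c_surface, c_fog))
```

At z_start the surface is fully visible (f = 1); at z_end it is fully fogged (f = 0).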
Fog
Exponential: f = exp(−d_f · z_p)
Squared exponential: f = exp(−(d_f · z_p)²)
The exponential form is derived from Beer's law: the intensity of outgoing light diminishes exponentially with distance
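The two exponential variants can be sketched the same way (d_f is the fog density; names are illustrative):

```python
import math

def exp_fog_factor(z_p, density):
    """Exponential fog from Beer's law: f = exp(-density * z_p)."""
    return math.exp(-density * z_p)

def exp2_fog_factor(z_p, density):
    """Squared-exponential fog: f = exp(-(density * z_p)**2)."""
    return math.exp(-(density * z_p) ** 2)
```

Both start at f = 1 at the viewer (z_p = 0) and fall off with depth.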
Fog
f values for different depths can be precomputed and stored in a table on the GPU
The distances used in the f calculations are planar; one can also use the Euclidean (radial) distance from the viewer to create radial fog
Different Atmospheres
More generally, we can simulate better skies
Volume Rendering
Volumetric data is represented as volumetric pixels (voxels), e.g. from CT/MRI scans
Rendering methods:
Implicit surface techniques convert voxel samples into polygonal surfaces (called isosurfaces)
Voxel data as a set of 2D image slices (Lacroute & Levoy)
Splatting: each voxel is represented by an alpha-blended circular object (splat) that drops off in opacity at its fringes
Volume slices as textured quads (OpenGL Volumizer API)
Volumetric Texturing
Represent objects as sequence of semi‐transparent textures
Good for rendering fuzzy or hairy objects
References
Kutulakos, K., CSC 2530H: Visual Modeling, course slides
UIUC CS 319: Advanced Computer Graphics, course slides
David Luebke, CS 446, University of Virginia, course slides
Chapter 2 of Real-Time Rendering
Suman Nadella, CS 563 slides, Spring 2005