OIT and Indirect Illumination using DX11 Linked Lists - AMD
OIT and Indirect Illumination using DX11 Linked Lists
Holger Gruen, AMD ISV Relations
Nicolas Thibieroz, AMD ISV Relations
Agenda
Introduction
Linked List Rendering
Order Independent Transparency
Indirect Illumination
Q&A
Introduction
Direct3D 11 HW opens the door to many new rendering algorithms
In particular, per-pixel linked lists allow for a number of new techniques: OIT, indirect shadows, ray tracing of dynamic scenes, REYES surface dicing, custom AA, irregular Z-buffering, custom blending, advanced depth of field, etc.
This talk will walk you through:
A DX11 implementation of per-pixel linked lists
Two effects that utilize this technique: OIT and Indirect Illumination
Per-Pixel Linked Lists with Direct3D 11
[Diagram: a per-pixel linked list drawn as a chain of Element/Link nodes]
Nicolas Thibieroz
European ISV Relations, AMD
Why Linked Lists?
A data structure useful for programming, but very hard to implement efficiently with previous real-time graphics APIs
DX11 allows efficient creation and parsing of linked lists
Per-pixel linked lists: a collection of linked lists enumerating all fragments belonging to the same screen position
[Diagram: several chains of Element/Link nodes, one list per pixel]
Two-step process:
1) Linked List Creation: store incoming fragments into linked lists
2) Rendering from Linked List: linked list traversal and processing of stored fragments
Creating Per-Pixel Linked Lists
PS5.0 and UAVs
Uses a Pixel Shader 5.0 to store fragments into linked lists
Not a Compute Shader 5.0!
Uses atomic operations
Two UAV buffers required:
- "Fragment & Link" buffer
- "Start Offset" buffer
UAV = Unordered Access View
Fragment & Link Buffer
The "Fragment & Link" buffer contains data and a link for every fragment to store
Must be large enough to store all fragments
Created with counter support: D3D11_BUFFER_UAV_FLAG_COUNTER flag in the UAV view
Declaration:
struct FragmentAndLinkBuffer_STRUCT
{
    FragmentData_STRUCT FragmentData; // Fragment data
    uint uNext;                       // Link to next fragment
};
RWStructuredBuffer<FragmentAndLinkBuffer_STRUCT> FLBuffer;
Start Offset Buffer
The "Start Offset" buffer contains the offset of the last fragment written at every pixel location
Screen-sized: (width * height * sizeof(UINT32))
Initialized to a magic value (e.g. -1)
The magic value indicates no more fragments are stored (i.e. end of the list)
Declaration:
RWByteAddressBuffer StartOffsetBuffer;
Linked List Creation (1)
No color Render Target bound!
No rendering yet, just storing into the linked lists
Depth buffer bound if needed (OIT will need it in a few slides)
UAVs bound as input/output:
StartOffsetBuffer (R/W)
FragmentAndLinkBuffer (W)
[Diagram: the screen-sized Start Offset Buffer with all entries set to -1, an empty Fragment and Link Buffer, and the viewport]
Linked List Creation (2a)
[Diagram: fragment counter initialized to 0; Start Offset Buffer still all -1]

Linked List Creation (2b)
[Diagram: the first fragment is stored at index 0 with uNext = -1; its Start Offset Buffer entry is set to 0; the counter is incremented from 0 to 1]

Linked List Creation (2c)
[Diagram: two more fragments are stored at indices 1 and 2; their Start Offset Buffer entries are updated; the counter is incremented to 3]

Linked List Creation (2d)
[Diagram: a new fragment at index 3 lands on an already-occupied pixel; its uNext is set to the previous head (0) and the Start Offset Buffer entry now points to 3; the counter is incremented to 4]
Linked List Creation - Code
float PS_StoreFragments(PS_INPUT input) : SV_Target
{
    // Calculate fragment data (color, depth, etc.)
    FragmentData_STRUCT FragmentData = ComputeFragment();

    // Retrieve current pixel count and increase counter
    uint uPixelCount = FLBuffer.IncrementCounter();

    // Exchange offsets in StartOffsetBuffer
    uint2 vPos = uint2(input.vPos.xy);
    uint uStartOffsetAddress = 4 * ((SCREEN_WIDTH * vPos.y) + vPos.x);
    uint uOldStartOffset;
    StartOffsetBuffer.InterlockedExchange(
        uStartOffsetAddress, uPixelCount, uOldStartOffset);

    // Add new fragment entry in Fragment & Link Buffer
    FragmentAndLinkBuffer_STRUCT Element;
    Element.FragmentData = FragmentData;
    Element.uNext = uOldStartOffset;
    FLBuffer[uPixelCount] = Element;
}
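The shader's two atomic steps (allocate a slot with the counter, then swap the new slot index with the per-pixel head pointer) can be sketched on the CPU side. This is a minimal Python model, not D3D11 API; the class and method names are illustrative assumptions.

```python
# CPU-side Python sketch of per-pixel linked list creation. Models the
# atomic counter (IncrementCounter) and the head-pointer swap
# (InterlockedExchange) as plain sequential operations.
EOL = 0xFFFFFFFF  # magic "end of list" value

class PerPixelLinkedLists:
    def __init__(self, width, height):
        self.width = width
        self.start_offset = [EOL] * (width * height)  # Start Offset Buffer
        self.fl_buffer = []                           # Fragment & Link Buffer

    def store_fragment(self, x, y, fragment_data):
        # IncrementCounter: the index of the new element
        pixel_count = len(self.fl_buffer)
        addr = self.width * y + x
        # InterlockedExchange: new head in, old head out
        old_start = self.start_offset[addr]
        self.start_offset[addr] = pixel_count
        # Append (fragment data, link to previous head)
        self.fl_buffer.append((fragment_data, old_start))

lists = PerPixelLinkedLists(4, 4)
lists.store_fragment(1, 1, "A")
lists.store_fragment(1, 1, "B")  # chains in front of "A"
```

Note that, exactly as in the shader, the newest fragment always becomes the head of its pixel's list.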
Traversing Per-Pixel Linked Lists
Rendering Pixels (1)
"Start Offset" Buffer and "Fragment & Link" Buffer now bound as SRVs:
Buffer<uint> StartOffsetBufferSRV;
StructuredBuffer<FragmentAndLinkBuffer_STRUCT> FLBufferSRV;
Render a fullscreen quad
For each pixel, parse the linked list and retrieve all fragments for this screen position
Process the list of fragments as required
Depends on the algorithm, e.g. sorting, finding maximum, etc.
SRV = Shader Resource View
Rendering from Linked List
[Diagram: a fullscreen pass reads each pixel's head offset from the Start Offset Buffer, follows the uNext links through the Fragment and Link Buffer, and writes the processed result to the Render Target]
Rendering Pixels (2)
float4 PS_RenderFragments(PS_INPUT input) : SV_Target
{
    // Calculate start offset buffer address (element index into Buffer<uint>)
    uint2 vPos = uint2(input.vPos.xy);
    uint uStartOffsetAddress = SCREEN_WIDTH * vPos.y + vPos.x;

    // Fetch offset of first fragment for current pixel
    uint uOffset = StartOffsetBufferSRV.Load(uStartOffsetAddress);

    // Parse linked list for all fragments at this position
    float4 FinalColor = float4(0, 0, 0, 0);
    while (uOffset != 0xFFFFFFFF) // 0xFFFFFFFF is the magic value
    {
        // Retrieve fragment at current offset
        FragmentAndLinkBuffer_STRUCT Element = FLBufferSRV[uOffset];

        // Process fragment as required
        ProcessPixel(Element, FinalColor);

        // Retrieve next offset
        uOffset = Element.uNext;
    }
    return FinalColor;
}
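The traversal loop is a plain linked-list walk; a small Python sketch (function name and buffer layout are illustrative assumptions, matching the two-buffer layout above):

```python
# Python sketch of the linked-list traversal done in the rendering pass.
# start_offset/fl_buffer mirror the two buffers; 0xFFFFFFFF ends a list.
EOL = 0xFFFFFFFF

def fragments_at(start_offset, fl_buffer, width, x, y):
    """Walk the per-pixel list and return stored fragments, newest first."""
    result = []
    offset = start_offset[width * y + x]
    while offset != EOL:
        fragment_data, next_offset = fl_buffer[offset]
        result.append(fragment_data)
        offset = next_offset
    return result

# Two fragments were stored at pixel (1, 1) of a 4-pixel-wide screen:
start_offset = [EOL] * 16
start_offset[5] = 1                 # head points at the last write
fl_buffer = [("A", EOL), ("B", 0)]  # "B" links back to "A"
```

Because fragments are prepended on creation, the walk returns them in reverse submission order, which is why OIT still has to sort by depth.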
Order-Independent Transparency via Per-Pixel Linked Lists
Nicolas Thibieroz
European ISV Relations, AMD
Description
A straight application of the linked list algorithm
Stores transparent fragments into a PPLL
The rendering phase sorts fragments in back-to-front order and blends them manually in the pixel shader
The blend mode can be unique per pixel!
Special case for MSAA support
Linked List Structure
Optimize performance by reducing the amount of data to write to/read from the UAV
E.g. uint instead of float4 for color
Example data structure for OIT:
struct FragmentAndLinkBuffer_STRUCT
{
    uint uPixelColor; // Packed pixel color
    uint uDepth;      // Pixel depth
    uint uNext;       // Address of next link
};
May also get away with packing color and depth into the same uint! (if same alpha)
16 bits color (565) + 16 bits depth
Performance/memory/quality trade-off
Visible Fragments Only!
Use [earlydepthstencil] in front of the linked list creation pixel shader
This ensures only transparent fragments that pass the depth test are stored, i.e. visible fragments!
Allows performance savings and rendering correctness!
[earlydepthstencil]
float PS_StoreFragments(PS_INPUT input) : SV_Target
{
    ...
}
Sorting Pixels
Sorting in place requires R/W access to the linked list
Sparse memory accesses = slow!
A better way is to copy all fragments into an array of temp registers, then do the sorting
The temp array declaration means a hard limit on the number of fragments per screen coordinate
A required trade-off for performance
Sorting and Blending
Blend fragments back to front in the PS
The blending algorithm is up to the app
Example: SRCALPHA-INVSRCALPHA
Or unique per pixel! (stored in the fragment data)
The background is passed as an input texture
Actual HW blending mode disabled
[Diagram: fragments with depths 0.98, 0.95, 0.93, 0.87 are copied into the temp array, sorted, then blended over the background color; the resulting PS color is written to the Render Target]
Storing Pixels for Sorting
(...)
    static uint2 SortedPixels[MAX_SORTED_PIXELS];

    // Parse linked list for all pixels at this position
    // and store them into temp array for later sorting
    int nNumPixels = 0;
    while (uOffset != 0xFFFFFFFF)
    {
        // Retrieve pixel at current offset
        Element = FLBufferSRV[uOffset];

        // Copy pixel data into temp array
        SortedPixels[nNumPixels++] = uint2(Element.uPixelColor, Element.uDepth);

        // Retrieve next offset
        [flatten] uOffset = (nNumPixels >= MAX_SORTED_PIXELS)
                            ? 0xFFFFFFFF : Element.uNext;
    }

    // Sort pixels in-place
    SortPixelsInPlace(SortedPixels, nNumPixels);
(...)
Pixel Blending in PS
(...)
    // Retrieve current color from background texture
    float4 vCurrentColor = BackgroundTexture.Load(int3(vPos.xy, 0));

    // Render pixels using SRCALPHA-INVSRCALPHA blending
    for (int k = 0; k < nNumPixels; k++)
    {
        // Retrieve next unblended furthermost pixel
        float4 vPixColor = UnpackFromUint(SortedPixels[k].x);

        // Manual blending between current fragment and previous one
        vCurrentColor.xyz = lerp(vCurrentColor.xyz, vPixColor.xyz, vPixColor.w);
    }

    // Return manually-blended color
    return vCurrentColor;
}
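The whole resolve (sort back to front, then lerp each fragment over the running color) can be modeled in a few lines of Python; fragment layout and function names here are illustrative assumptions, not the shader's actual types.

```python
# Python sketch of the OIT resolve: sort fragments back to front by depth,
# then blend manually with SRCALPHA-INVSRCALPHA (a lerp by alpha).
# Colors are (r, g, b) tuples in [0, 1]; fragments are (color, alpha, depth).

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def resolve_oit(background, fragments):
    # Back to front: largest depth first (depth grows away from the camera)
    ordered = sorted(fragments, key=lambda f: f[2], reverse=True)
    color = background
    for frag_color, alpha, _depth in ordered:
        color = lerp(color, frag_color, alpha)
    return color

background = (0.0, 0.0, 0.0)
fragments = [((1.0, 0.0, 0.0), 0.5, 0.2),   # red, nearest
             ((0.0, 1.0, 0.0), 0.5, 0.8)]   # green, farthest
final = resolve_oit(background, fragments)
```

The farthest (green) fragment is blended first, so the near (red) fragment correctly dominates the final color.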
OIT via Per-Pixel Linked Lists with MSAA Support
Sample Coverage
Storing individual samples into linked lists requires a huge amount of memory... and performance will suffer!
The solution is to store transparent pixels into the PPLL as before, but including sample coverage too!
Requires as many bits as the MSAA mode
Declare SV_COVERAGE in the PS input structure:
struct PS_INPUT
{
    float3 vNormal   : NORMAL;
    float2 vTex      : TEXCOORD;
    float4 vPos      : SV_POSITION;
    uint   uCoverage : SV_COVERAGE;
};

Linked List Structure
Almost unchanged from before
Depth is now packed into 24 bits; 8 bits are used to store coverage
struct FragmentAndLinkBuffer_STRUCT
{
    uint uPixelColor;       // Packed pixel color
    uint uDepthAndCoverage; // Depth + coverage
    uint uNext;             // Address of next link
};
Sample Coverage Example
[Diagram: a 4x MSAA pixel with its pixel center and four samples]
The third sample is covered: uCoverage = 0x04 (0100 in binary)
Element.uDepthAndCoverage = ( uint(In.vPos.z * (2^24 - 1)) << 8 ) | In.uCoverage;
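The packing above (24-bit depth in the high bits, 8-bit coverage mask in the low bits) is easy to model and check; the helper names are illustrative assumptions.

```python
# Python sketch of packing a 24-bit depth and an 8-bit coverage mask into
# one 32-bit value, as in uDepthAndCoverage (depth in the high 24 bits).

def pack_depth_coverage(depth, coverage):
    """depth in [0, 1], coverage an 8-bit MSAA sample mask."""
    depth_bits = int(depth * ((1 << 24) - 1)) & 0xFFFFFF
    return (depth_bits << 8) | (coverage & 0xFF)

def unpack_coverage(packed):
    return packed & 0xFF

def unpack_depth(packed):
    return (packed >> 8) / ((1 << 24) - 1)

packed = pack_depth_coverage(0.5, 0x04)  # third sample covered
```

During the resolve, testing `unpack_coverage(packed) & (1 << sample_index)` tells each sample-frequency invocation whether the fragment covers it.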
Rendering Samples (1)
The rendering phase needs to be able to write individual samples
Thus the PS is run at sample frequency
Can be done by declaring SV_SAMPLEINDEX in the input structure
Parse the linked list and store pixels into a temp array for later sorting
Similar to the non-MSAA case
The difference is to only store a sample if its coverage matches the sample index being rasterized

    static uint2 SortedPixels[MAX_SORTED_PIXELS];

    // Parse linked list for all pixels at this position
    // and store them into temp array for later sorting
    int nNumPixels = 0;
    while (uOffset != 0xFFFFFFFF)
    {
        // Retrieve pixel at current offset
        Element = FLBufferSRV[uOffset];

        // Retrieve pixel coverage from linked list element
        uint uCoverage = UnpackCoverage(Element.uDepthAndCoverage);
        if (uCoverage & (1 << In.uSampleIndex))
        {
            // Coverage matches current sample so copy pixel
            SortedPixels[nNumPixels++] = Element;
        }

        // Retrieve next offset
        [flatten] uOffset = (nNumPixels >= MAX_SORTED_PIXELS)
                            ? 0xFFFFFFFF : Element.uNext;
    }
Rendering Samples (2)
OIT Linked List Demo
DEMO

Holger Gruen
European ISV Relations, AMD
Direct3D 11 Indirect Illumination
Indirect Illumination Introduction 1
Real-time indirect illumination is an active research topic
Numerous approaches exist:
Reflective Shadow Maps (RSM) [Dachsbacher/Stamminger 2005]
Splatting Indirect Illumination [Dachsbacher/Stamminger 2006]
Multi-Res Splatting of Illumination [Wyman 2009]
Light Propagation Volumes [Kaplanyan 2009]
Approximating Dynamic Global Illumination in Image Space [Ritschel 2009]
Only a few support indirect shadows:
Imperfect Shadow Maps [Ritschel/Grosch 2008]
Micro-Rendering for Scalable, Parallel Final Gathering (SSDO) [Ritschel 2010]
Cascaded Light Propagation Volumes for Real-Time Indirect Illumination [Kaplanyan/Dachsbacher 2010]
Most approaches somehow extend to multi-bounce lighting
Indirect Illumination Introduction 2
This section will cover:
An efficient and simple DX9-compliant RSM-based implementation of smooth one-bounce indirect illumination
Indirect shadows are ignored here
A Direct3D 11 technique that traces rays to compute indirect shadows
Part of this technique could generally be used for ray tracing dynamic scenes
Indirect Illumination w/o Indirect Shadows
1. Draw scene g-buffer
2. Draw Reflective Shadowmap (RSM)
   The RSM shows the part of the scene that receives direct light from the light source
3. Draw Indirect Light buffer at ¼ res
   RSM texels are used as light sources on g-buffer pixels for indirect lighting
4. Upsample Indirect Light (IL)
5. Draw final image adding IL
Step 1
The G-buffer needs to allow reconstruction of:
World/camera space position
World/camera space normal
Color/albedo
DXGI_FORMAT_R32G32B32A32_FLOAT positions may be required for precise ray queries for indirect shadows
Step 2
The RSM needs to allow reconstruction of:
World space position
World space normal
Color/albedo
Only draw emitters of indirect light
DXGI_FORMAT_R32G32B32A32_FLOAT positions may be required for precise ray queries for indirect shadows
Step 3
Render a ¼ res IL as a deferred op
Transform each g-buffer pixel to RSM space: to light space, then project to RSM texel space
Use a kernel of RSM texels as light sources
RSM texels are also called Virtual Point Lights (VPLs)
The kernel size depends on:
Desired speed
Desired look of the effect
RSM resolution
Computing IL at a G-buf Pixel 1
Sum up contribution of all VPLs in the kernel
Computing IL at a G-buf Pixel 2
[Diagram: a g-buffer pixel at position P with normal N_P, and an RSM texel/VPL at position P_VPL with normal N_VPL; D is the normalized direction from P to the VPL]

D = (P_VPL - P) / ||P_VPL - P||
Contribution = Col_VPL * Area_VPL * sat(dot(N_P, D)) * sat(dot(N_VPL, -D)) / ||P_VPL - P||^2

This term is very similar to terms used in radiosity form factor computations
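The per-VPL term can be sketched directly in Python; vector helpers and the function name are illustrative assumptions, and sat() mirrors HLSL's saturate().

```python
# Python sketch of the single-VPL contribution term. Vectors are 3-tuples;
# sat() clamps to [0, 1] like HLSL saturate().
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sat(x):    return min(max(x, 0.0), 1.0)

def vpl_contribution(p, n_p, p_vpl, n_vpl, col_vpl, area_vpl):
    d_vec = sub(p_vpl, p)
    dist2 = dot(d_vec, d_vec)
    inv_len = 1.0 / math.sqrt(dist2)
    d = tuple(x * inv_len for x in d_vec)   # direction P -> VPL
    neg_d = tuple(-x for x in d)
    weight = area_vpl * sat(dot(n_p, d)) * sat(dot(n_vpl, neg_d)) / dist2
    return tuple(c * weight for c in col_vpl)

# Receiver at the origin facing +z; VPL two units above, facing straight down:
c = vpl_contribution((0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, -1),
                     (1.0, 1.0, 1.0), 0.1)
```

The IL at a g-buffer pixel is then the sum of this term over every VPL in the kernel, as the previous slide states.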
Computing IL at a G-buf Pixel 3
stx: sub-RSM-texel x position of the g-buffer pixel, in [0.0, 1.0)
sty: sub-RSM-texel y position of the g-buffer pixel, in [0.0, 1.0)
A naive solution for smooth IL needs to consider four VPL kernels with centers at the four closest RSM texels t0, t1, t2 and t3.
Computing IL at a g-buf pixel 4
IndirectLight = (1.0f - sty) * ((1.0f - stx) * K(t0) + stx * K(t1)) +
                       sty  * ((1.0f - stx) * K(t2) + stx * K(t3))

where K(ti) is the summed VPL contribution of the kernel centered at ti.
Evaluation of 4 big VPL kernels is slow.
Computing IL at a g-buf pixel 5
SmoothIndirectLight =
    (1.0f - sty) * (((1.0f - stx) * (B0 + B3) + stx * (B2 + B5)) + B1) +
           sty  * (((1.0f - stx) * (B6 + B3) + stx * (B8 + B5)) + B7) + B4

stx, sty: sub-RSM-texel position of the g-buf pixel, in [0.0, 1.0)
This trick is probably known to some of you already. See backup for a detailed explanation!
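Assuming the nine blocks B0..B8 tile the kernel neighborhood row-major (B0 B1 B2 / B3 B4 B5 / B6 B7 B8) and each of the four big kernels is the 2x2 group of blocks nearest its center, the smooth formula is exactly the bilinear blend of the four kernel sums. A small Python check of that identity (block layout is an assumption from the backup-slide trick):

```python
# Python check that the B0..B8 decomposition equals the bilinear blend of
# the four big VPL kernels, for any block values and any (stx, sty).

def naive_il(B, stx, sty):
    k_t0 = B[0] + B[1] + B[3] + B[4]  # top-left kernel
    k_t1 = B[1] + B[2] + B[4] + B[5]  # top-right kernel
    k_t2 = B[3] + B[4] + B[6] + B[7]  # bottom-left kernel
    k_t3 = B[4] + B[5] + B[7] + B[8]  # bottom-right kernel
    return ((1.0 - sty) * ((1.0 - stx) * k_t0 + stx * k_t1) +
                   sty  * ((1.0 - stx) * k_t2 + stx * k_t3))

def smooth_il(B, stx, sty):
    return ((1.0 - sty) * (((1.0 - stx) * (B[0] + B[3]) +
                            stx * (B[2] + B[5])) + B[1]) +
                   sty  * (((1.0 - stx) * (B[6] + B[3]) +
                            stx * (B[8] + B[5])) + B[7]) + B[4])

B = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
```

The win is that each block sum is evaluated once instead of up to four times, so the four-kernel blend costs barely more than one kernel.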
Indirect Light Buffer
Step 4
The Indirect Light buffer is ¼ res
Perform a bilateral upsampling step
See Peter-Pike Sloan, Naga K. Govindaraju, Derek Nowrouzezahrai, John Snyder. "Image-Based Proxy Accumulation for Real-Time Soft Global Illumination". Pacific Graphics 2007
The result is a full resolution IL buffer
Step 5
Combine:
Direct Illumination
Indirect Illumination
Shadows (not mentioned)
Combined Image
~280 FPS on a HD5970 @ 1280x1024 for a 15x15 VPL kernel
Deferred IL pass + bilateral upsampling costs ~2.5 ms
DEMO
How to add Indirect Shadows
1. Use a CS and the linked lists technique
   Insert blockers of IL into a 3D grid of lists (let's use triangles for now)
   See backup for an alternative data structure
2. Look at a kernel of VPLs again
3. Only accumulate light of VPLs that are occluded by blocker tris
   Trace rays through the 3D grid to detect occluded VPLs
   Render a low res buffer only
4. Subtract blocked indirect light from the IL buffer
   Subtract a blurred version of the low res buffer
   The blur is a combined bilateral blurring/upsampling
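Step 1 reuses the same head-pointer-exchange trick as the per-pixel lists, just keyed by grid cell instead of screen pixel. A minimal Python sketch; the class, the cell mapping, and inserting by a single point (instead of every cell a triangle overlaps) are simplifying assumptions.

```python
# Python sketch of a world-space 3D grid of triangle lists, built with the
# same head-exchange scheme as the per-pixel linked lists.
EOL = 0xFFFFFFFF

class TriangleGrid:
    def __init__(self, dims, cell_size):
        self.dims = dims                     # (nx, ny, nz)
        self.cell_size = cell_size
        nx, ny, nz = dims
        self.heads = [EOL] * (nx * ny * nz)  # per-cell list heads (UAV)
        self.nodes = []                      # (triangle_id, next) pairs

    def cell_index(self, point):
        i, j, k = (int(c // self.cell_size) for c in point)
        return (k * self.dims[1] + j) * self.dims[0] + i

    def insert(self, triangle_id, point):
        # Insert into the cell containing 'point'; a real implementation
        # would insert into every cell the triangle overlaps
        cell = self.cell_index(point)
        node = len(self.nodes)
        old_head = self.heads[cell]          # InterlockedExchange
        self.heads[cell] = node
        self.nodes.append((triangle_id, old_head))

    def triangles_in_cell(self, point):
        result, offset = [], self.heads[self.cell_index(point)]
        while offset != EOL:
            tri, offset = self.nodes[offset]
            result.append(tri)
        return result

grid = TriangleGrid((4, 4, 4), 1.0)
grid.insert(7, (0.5, 0.5, 0.5))
grid.insert(9, (0.5, 0.5, 0.5))
```

Ray traversal then steps from cell to cell and tests only the triangles in each visited cell's list.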
Insert tris into 3D grid of triangle lists
[Diagram: dynamic blockers in the scene are rasterized into a world-space 3D grid of triangle lists around the IL blockers, laid out in a UAV, using a CS and atomics; eol = end of list (0xffffffff)]
3D Grid Demo
Indirect Light Buffer
Emitter of greenlight
Blocker of green light
Expectedindirect shadow
Blocked Indirect Light
Indirect Light Buffer
Subtracting Blocked IL
Final Image
~300 million rays per second for Indirect Shadows
~70 FPS on a HD5970 @ 1280x1024
Ray casting costs ~9 ms
DEMO
Future directions
Speed up IL rendering:
Render IL at even lower res
Look into multi-res RSMs
Speed up ray tracing:
Per-pixel array of lists for depth buckets
Other data structures
Raytrace other primitive types:
Splats, fuzzy ellipsoids etc.
Proxy geometry or bounding volumes of blockers
Get rid of Interlocked*() ops:
Just mark grid cells as occupied
Lower quality, but could work on earlier hardware
Need to solve self-occlusion issues
Q&A
Holger Gruen [email protected]
Nicolas Thibieroz
Credits for the basic idea of how to implement PPLL under Direct3D 11 go to Jakub Klarowicz (Techland), Holger Gruen and Nicolas Thibieroz (AMD)