Spatially Varying IBL Using Light Probe Sequences

Department of Science and Technology (Institutionen för teknik och naturvetenskap)
Linköping University (Linköpings Universitet)
SE-601 74 Norrköping, Sweden

LiU-ITN-TEK-A--09/011--SE

Spatially Varying IBL Using Light Probe Sequences

Richard Khoury

2009-02-13


Source: liu.diva-portal.org/smash/get/diva2:634059/FULLTEXT01.pdf


Master's thesis in media technology (examensarbete i medieteknik), carried out at the Institute of Technology, Linköping University

Supervisor: Jonas Unger
Examiner: Stefan Gustavson

Norrköping, 2009-02-13



Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication, barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law, the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

© Richard Khoury


Abstract

Visual media, such as film and computer games, often require the realistic rendering of synthetic objects. Image Based Lighting (IBL) techniques provide methods for applying measured real-world lighting to synthetic objects, making them appear believable within their environment. Given this ability, IBL techniques have drawn interest within many industries involved in visual effects; however, their adoption has been mostly confined to implementations of the original method.

Traditional IBL, as it is now known, only requires the measurement of light at one position in space to provide the data for illuminating all synthetic objects in the scene. This single requirement places large constraints on the complexity of illumination within a scene by assuming there are negligible changes in lighting within the extent of the local environment. Because of this, lighting features with a spatial frequency greater than zero, such as shadows, cannot be represented within this limited model.

Modern research into IBL techniques aims to resolve this problem by presenting methods to capture, process, and render spatially varying illumination. This thesis builds upon recent research into densely sampled light probe sequences and considers its use in a production environment. Its objective is to present a set of tools for processing and rendering this data for use with the commercial software packages Maya, a modelling and animation application, and mental ray, a high-fidelity renderer.



Acknowledgments

I would like to thank everyone in the VITA department at Linköping University for providing an amazing environment for graphical inspiration. The pleasure I have had studying within this department has made me wish I could study there forever.

A special thanks goes to those with whom I worked closely during my thesis term, especially Stefan Gustavson for his amazing insights and breadth of knowledge, Jonas Unger for his technical genius, and Per Larsson for his practical know-how.

And finally, I'd like to thank my lovely fiancée, Alison, who has been the most remarkable support during my two years in Sweden. I ♥ U



Contents

1 Introduction
2 Background
   2.1 The Rendering Equation
   2.2 Traditional Image Based Lighting
      2.2.1 The General Method
      2.2.2 High Dynamic Range Imaging
      2.2.3 Rendering
      2.2.4 Limitations
   2.3 Modern Image Based Lighting techniques
      2.3.1 Improving Capture Time
      2.3.2 Spatially Varying Data
      2.3.3 Improving Rendering Efficiency
3 Light Probe Sequences
   3.1 Off-line Rendering
      3.1.1 Traditional IBL
      3.1.2 The Plenoptic Function
      3.1.3 Nearest Neighbour Sampling
      3.1.4 Ray Projection Sampling
      3.1.5 Single-Viewpoint Reprojection
   3.2 Real-time Diffuse Rendering
      3.2.1 Spherical Harmonics
      3.2.2 Down-Sampling
4 Implementation and Usage
   4.1 Ray Projection Algorithm
   4.2 Processing
      4.2.1 Spherical Harmonic Representation
      4.2.2 Down-Sampled Representation
   4.3 Maya Hardware Shader
      4.3.1 Limitations
   4.4 Mental ray Shaders
      4.4.1 Environment Shader
      4.4.2 Area Light Shader
      4.4.3 Issues
5 Conclusion
   5.1 Future Work
A Remapping equations
B GLSL Code
   B.1 Spherical Harmonic Rendering for Light Probe Sequences
      B.1.1 Vertex Shader
      B.1.2 Fragment Shader
   B.2 Down-sample Rendering for Light Probe Sequences
      B.2.1 Vertex Shader
      B.2.2 Fragment Shader
C Ray Projection Equation


List of Figures

1.1 Scene composed of real and synthetic objects. The corresponding light probe image, bottom left, shows the radiance information used to illuminate the synthetic objects. Images taken from [1].
2.1 Ray-tracing. A ray is cast from the eye/camera, through a pixel-plane (the final image), and into the scene. The ray proceeds to bounce around the scene and returns the accumulation of colours/illumination found.
2.2 Debevec's general method. Image taken from [1].
2.3 The directions observed on a mirrored sphere from a parallel viewing plane.
2.4 Photos of a mirror sphere taken at various exposures for generating an HDR radiance map. Taken from [1].
2.5 The Real Time Light Probe used in [2, 3, 4].
2.6 Two capture devices presented by Unger et al. [5].
2.7 1D spatial variance rendered through IBL techniques discussed further in this thesis, and originally presented by Unger et al. [3].
2.8 3D spatial variance rendered through IBL techniques discussed by Unger et al. [4].
3.1 Traditional IBL applied to a scene using two neighbouring light probe samples. Image taken from [3].
3.2 Nearest neighbour sampling with a 1D light probe sequence. The point p is projected to the nearest point on the sample path. The nearest sample(s) are then used as normal radiance maps, indexed with the direction d.
3.3 A rendering using nearest neighbour sampling with a 1D light probe sequence. The vertical light bands are projected at an oblique angle to the sample path, though this method makes it appear as though they are orthogonal. This image also shows the noise generated by undersampling the environment maps, even though over 16000 samples were chosen per environment map and the render took some hours.
3.4 Ray projection sampling with a 1D light probe sequence. The direction d is projected to the sample path from point p, indicating the better samples to choose.
3.5 Ray projection sampling with a 1D light probe sequence. This method displays better results for points diverging from the sample path. This image also shows the noise generated by undersampling the environment maps, even though over 16000 samples were chosen per environment map and the render took some hours.
3.6 The single-viewpoint reprojection method. z0 (green) is the position of the mirror sphere at each frame, while the red points, z, are the projection points of each incident ray back to the sample path. R is the radius of the mirror sphere and r is the radial distance from the sample path to the ray's intersection point. Image taken from [3].
3.7 A scene lit by a grill-covered spot light. The diffuse materials in the bottom row reveal more about the spatially varying light than the specular materials in the top row.
3.8 The first four bands of the spherical harmonic basis functions. Green indicates positive values, while red indicates negative. This image was taken from [6].
3.9 An example of function projection (decomposition into basis function coefficients) and the subsequent reconstruction. This image was taken from [7].
3.10 Real-time rendering of a 1D light probe sequence using the spherical harmonic method. The same parallax errors occur as with the nearest neighbour method presented in Section 3.1.3.
3.11 Rendering of a sphere showing the difference in regions between the spherical sampling schemes. The positive directions for the x (red), y (green), and z (blue) axes are shown. The green dots represent the primary sample direction for that region. Figures 3.11(a) and 3.11(b) show the non-uniform regions produced by uniform sampling in polar coordinate space. Figures 3.11(c) and 3.11(d) show the more uniform regions produced by using the vertices of an icosahedron and its first subdivision respectively.
3.12 Real-time rendering of a 1D light probe sequence using the down-sampling method. The same four sample sets shown in Figure 3.11 are used.
4.1 The binary format of the spherical harmonic data processed from the light probe sequence, presented in terms of its use as a texture on the GPU.
4.2 The binary format of the down-sampled data processed from the light probe sequence, presented in terms of its use as textures on the GPU.
4.3 The hardware shader attribute editor.
4.4 The hardware shader attribute connections.
4.5 A screenshot of the Maya hardware shader in action.
4.6 Two scenes showing the scene rendered improperly when shadow tracing was enabled.
4.7 The skewing of spatial variance. This is a top view of the xz-plane, with the sample path (z-axis) located toward the middle of the image, as seen in 4.7(b).
A.1 The coordinate system in use.


Chapter 1

Introduction

The lighting within a scene is often the most important step in making computer graphics appear believable. In media applications where the aim is to combine both real and computer graphics content, the need for consistent and believable lighting is even more important, as our visual perception is finely tuned to real-world phenomena and any irregularities are easily noticed.

Figure 1.1. Scene composed of real and synthetic objects. The corresponding light probe image, bottom left, shows the radiance information used to illuminate the synthetic objects. Images taken from [1].

Modern techniques for rendering synthetic images mainly rely on the creation of synthetic light sources within the scene to provide the primary means of illumination. While this can make purely synthetic scenes appear realistic, it often fails to adequately reproduce all the nuances of illumination within a real-world scene. This downfall becomes more obvious when real and computer generated content are composited together, a process used regularly in film and television, architectural renderings, augmented reality applications, and still media such as product brochures. This broad industrial demand, and the desire for more photo-realistic images, has largely driven the research in the field of Image Based Lighting (IBL);


a technique that relies on the measurement of real-world lighting (through image capture) to provide the illumination for a synthetic scene. An example of this can be seen in Figure 1.1, where the objects in the centre of the image are all computer generated and illuminated by a single high dynamic range (HDR) image called a light probe.

Though IBL techniques have been around for some time, they have still not managed to pervade the modern production pipeline. Many commercial software packages include the ability to perform traditional IBL (discussed in the following chapter), but the more modern research still remains as proof-of-concept software written mostly for research purposes.

This thesis will discuss some of the more modern research into IBL techniques, focusing specifically on research into 1-dimensional, densely sampled light probe sequences. It aims to present a set of tools for working with this data using two industry-level software packages: Maya and mental ray.

The following chapter will present a background to IBL techniques and touch on the more modern research in this field.

Chapter 3 will present, in more detail, the techniques for working with sequential light probe data in both high-fidelity and real-time rendering applications.

Following from this, chapter 4 will detail the development of the tools, and how the relevant techniques are implemented within the Maya and mental ray frameworks.

Finally, chapter 5 will summarise the techniques and present an overview of how these tools could be improved in future developments.


Chapter 2

Background

Rendering synthetic objects in a realistic way is a non-trivial task. The illumination within a scene can often be quite complex, giving rise to many methods that attempt to solve its interaction with the environment. All these techniques provide some form of approximation of the illumination within a scene, each having its own benefits and drawbacks. At the root of all of these methods is a mathematical model of light within the scene, which modern rendering techniques are able to approximate with extremely realistic results.

This chapter aims to introduce the reader to the problems faced by realistic rendering methods, and the direction IBL techniques take in attempting to solve them.

2.1 The Rendering Equation

Rendering methods such as ray-tracing and radiosity attempt to simulate the physical characteristics of light energy (radiant flux) within a closed system: the scene. These global illumination methods are used for high-fidelity graphics as they better approximate the system of light energy that exists within a scene. Kajiya [8] was the first to describe the generalised form for calculating this system, referred to as the rendering equation:

B(x, \vec{\omega}_o) = L_\varepsilon(x, \vec{\omega}_o) + \int_{\Omega_h(\vec{n})} L(x, \vec{\omega}_i) \, \rho(x, \vec{\omega}_i \rightarrow \vec{\omega}_o) \, (\vec{\omega}_i \cdot \vec{n}) \, d\vec{\omega}_i \qquad (2.1)

Where:


B(x, \vec{\omega}_o) is the radiance at x in the outgoing direction \vec{\omega}_o,
L(x, \vec{\omega}_i) is the radiance at x from the incoming direction \vec{\omega}_i,
L_\varepsilon(x, \vec{\omega}_o) is the self emission at x in the outgoing direction \vec{\omega}_o,
\Omega_h(\vec{n}) is the hemisphere centred on the normal \vec{n} at the point x,
\rho(x, \vec{\omega}_i \rightarrow \vec{\omega}_o) is the SBRDF (Spatially varying Bi-directional Reflectance Distribution Function) at the point x,
(\vec{\omega}_i \cdot \vec{n}) is the cosine weighting of the radiance based on the angle between the incoming direction and the normal.

This equation describes the total exitant radiance along a ray originating from a point x in the direction \vec{\omega}_o as an emission component plus the weighted sum of all radiance entering that point from the hemisphere that exits in the same outgoing direction. Solving this equation for each point on all surfaces becomes a recursive problem that cannot be solved analytically by a computer, so clever approximations must be applied to achieve a desirable result in a more practical time-frame (see [9] and [10]). The core difficulty in determining correct illumination is knowing how light interacts between all objects in the scene. In practice, this is generally done by casting rays through the viewing plane, finding which object each ray intersects, and determining how that point of intersection is illuminated by the whole scene by casting more secondary rays. Figure 2.1 demonstrates the ray-tracing method.

Figure 2.1. Ray-tracing. A ray is cast from the eye/camera, through a pixel-plane (the final image), and into the scene. The ray proceeds to bounce around the scene and returns the accumulation of colours/illumination found.
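In practice, the hemisphere integral in Equation 2.1 is estimated by Monte Carlo sampling. The sketch below illustrates this for a purely diffuse surface; all function and parameter names are illustrative, not from the thesis' implementation. With cosine-weighted sampling, the pdf (\vec{\omega}_i \cdot \vec{n})/\pi cancels the cosine factor, so each sample's contribution reduces to the albedo times the incoming radiance.

```python
import math
import random

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def cosine_sample(n):
    """Cosine-weighted direction on the hemisphere around the unit normal n.
    The pdf (w . n) / pi cancels the cosine factor in Equation 2.1."""
    # Build an orthonormal basis (t, b, n) around the normal.
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(a, n))
    b = cross(n, t)
    r1, r2 = random.random(), random.random()
    phi, sr = 2.0 * math.pi * r1, math.sqrt(r2)
    x, y, z = math.cos(phi) * sr, math.sin(phi) * sr, math.sqrt(1.0 - r2)
    return tuple(x * t[i] + y * b[i] + z * n[i] for i in range(3))

def estimate_radiance(emission, albedo, normal, incident_radiance, samples=256):
    """One-bounce Monte Carlo estimate of Equation 2.1 for a diffuse surface
    (BRDF = albedo / pi). With cosine-weighted sampling, each sample's
    contribution reduces to albedo * L(w_i)."""
    total = 0.0
    for _ in range(samples):
        w_i = cosine_sample(normal)
        total += albedo * incident_radiance(w_i)
    return emission + total / samples

# A grey, non-emissive surface under a constant sky of radiance 1
# integrates to exactly the albedo, independent of the random samples.
b = estimate_radiance(0.0, 0.5, (0.0, 0.0, 1.0), lambda w: 1.0)
```

A full ray tracer would recurse: `incident_radiance` would itself cast a secondary ray and call the estimator again at the next intersection, which is exactly the recursive problem described above.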

This algorithm can be extremely slow in producing an adequate image. For applications that require a virtual object be placed in a real scene, this problem becomes even harder to solve by traditional means. A naïve


solution using the methods described above would require the artist to produce accurate models for the materials, lighting, and geometry within the scene in order for the virtual objects to render correctly. But given a few constraints on the scene, this task can become much easier to solve.

2.2 Traditional Image Based Lighting

In 1998, Debevec [1] described a simple method for effectively rendering synthetic objects into a real scene. This paper extended upon ideas presented five years earlier by Fournier et al. [11], in which a synthetic object could be rendered into a real scene given various assumptions about the scene's geometry and the common viewing parameters. Whilst the approach by Fournier et al. did not produce a seamless composition, they did show it was possible to render virtual objects within a real environment. The method given by Debevec, however, did not require the same assumptions that Fournier et al. imposed, and still produced visually pleasing results (see Figure 1.1).

2.2.1 The General Method

In order for traditional image based lighting to work, the scene must satisfy the general method described by Debevec in Figure 2.2. The most important notion within this diagram is that the distant scene is considered a component that affects all objects within the local scene, but is not in turn re-affected by any of those objects. This constraint is the basis for how the method works.

Figure 2.2. Debevec’s general method. Image taken from [1].

Given this scene constraint, it becomes possible to record the incident light at a point within the local scene and use it as a global illumination measurement


for illuminating all virtual objects close to that point. The tool Debevec used to capture the illumination from almost all directions is called a light probe and is ideally a completely reflective sphere. The probe represents all incident light onto a point in the local scene corresponding to the sphere's centre point¹. Figure 2.3 shows the principle of a light probe, and how the perfect observer² can see almost all of the scene from the range of directions offered by the reflections in the sphere, except those parts which the sphere directly occludes.


Figure 2.3. The directions observed on a mirrored sphere from a parallel viewing plane.

What Figure 2.3 also attempts to show is the non-linearity of the viewing angles as the viewing rays draw further away from the centre of the sphere. This can be easily observed between the blue and the red rays, where the same small vertical difference from the view plane spans a larger angular range in the light probe. Because of this, care must be taken to maximise the resolution and quality of the light probe capture so that sufficient data exists towards the sphere's edge.
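Under the perfect-observer assumption, the mapping from a position on the mirror-sphere image to the reflected world direction follows from the mirror reflection law r = d - 2(d · n)n. The sketch below is a standard mirror-ball remapping, not necessarily the exact convention derived in Appendix A; the coordinate conventions and names are illustrative.

```python
import math

def probe_direction(u, v):
    """Map normalized mirror-ball image coordinates (u, v), each in [-1, 1]
    with u*u + v*v <= 1, to the world direction reflected toward an
    orthographic camera looking along -z (the 'perfect observer').
    The sphere normal at that pixel is n = (u, v, nz)."""
    nz = math.sqrt(max(0.0, 1.0 - u * u - v * v))
    # Reflect the view ray d = (0, 0, -1) about n: r = d - 2 (d . n) n,
    # where d . n = -nz, giving the closed form below.
    return (2.0 * nz * u, 2.0 * nz * v, 2.0 * nz * nz - 1.0)

# The sphere centre reflects straight back at the camera (+z), while the
# rim maps to the direction directly behind the sphere (-z).
centre = probe_direction(0.0, 0.0)   # -> (0.0, 0.0, 1.0)
rim = probe_direction(1.0, 0.0)      # -> (0.0, 0.0, -1.0)
```

This form also exhibits the non-linearity discussed above: equal steps in u or v near the rim sweep a much larger angular range than the same steps near the centre, which is why resolution at the sphere's edge matters.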

2.2.2 High Dynamic Range Imaging

Since a light probe image captures the radiant flux from the global scene, the lighting information must be in a format that represents radiance values. Low dynamic range (LDR) images, as captured by commodity digital or film-based cameras, cannot measure all intensities of light that the scene contains within one exposure. This can be seen in over- or under-saturated images, where light intensity information is clamped to values that the camera sensor or film can handle.

¹ This is a false assumption, as mentioned in [3], but is a sufficient representation for traditional methods.

² The perfect observer is infinitely far away from the sphere, such that all view rays are parallel. This would never happen in reality, but it allows the maths and conceptual understanding to remain more straightforward.


Debevec and Malik [12] describe a technique for recovering high dynamic range (HDR) radiance values from a set of LDR images. These LDR images must be taken with varying exposures so that each pixel is properly saturated in at least one of the images in the set. Knowledge of the film or sensor's response to light, known as the response curve, and the exact exposure of each image is then used to generate an HDR image containing radiance values. Figure 2.4 shows a light probe captured with three different exposures, allowing the extraction of all radiance values, including the low and high intensity areas within the scene.

Figure 2.4. Photos of a mirror sphere taken at various exposures for generating an HDR radiance map. Taken from [1].
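The core of the exposure merge can be sketched as a per-pixel weighted average of pixel value divided by exposure time. The toy code below assumes a linear (or already-linearised) sensor response; the method of Debevec and Malik [12] additionally recovers the response curve from the image set itself, which is omitted here. All names and the weighting function are illustrative.

```python
def merge_hdr(ldr_images, exposure_times, z_min=0.05, z_max=0.95):
    """Merge LDR exposures (each a list of pixel values in [0, 1]) into one
    radiance map, assuming a linear response so that E = z / dt.
    A hat weighting trusts well-exposed pixels and discards values near
    the saturation limits, where clamping has destroyed information."""
    def weight(z):
        if z <= z_min or z >= z_max:
            return 0.0
        return 1.0 - abs(2.0 * z - 1.0)   # peaks at mid-grey

    n_pixels = len(ldr_images[0])
    radiance = []
    for i in range(n_pixels):
        num = den = 0.0
        for img, dt in zip(ldr_images, exposure_times):
            w = weight(img[i])
            num += w * img[i] / dt
            den += w
        radiance.append(num / den if den > 0.0 else 0.0)
    return radiance

# Two exposures of two pixels: the first pixel is well exposed in both
# (both agree on radiance 0.8); the second is saturated at 0.99 in the
# long exposure, so only the shorter one contributes (radiance 0.5).
hdr = merge_hdr([[0.2, 0.99], [0.8, 0.5]], [0.25, 1.0])
```

With the response curve recovered, the same structure applies with `g(z)` in place of the linear `z / dt` term, working in the log domain as in [12].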

2.2.3 Rendering

Now that an HDR radiance map has been obtained, it can be used as an environment map for rendering an object within the local scene. Each pixel that covers the mirror sphere corresponds to the flux travelling along a particular ray direction. So for each desired direction we wish to calculate the light from, we can index into the radiance map at a particular pixel and retrieve the information there. The equations for mapping between image coordinates of a light probe and various other coordinate systems are presented in Appendix A.

Rendering mirror-like materials realistically becomes as simple as doing one lookup per intersection into the radiance map; much easier than computing the full global illumination within the scene. Diffuse materials are a little more intensive, as the hemisphere around the surface normal of the object must be sampled. To improve the performance of rendering diffuse materials, a diffuse convolution map can be preprocessed to allow very quick mapping from surface normal to a diffuse value. The same can be done for various other material types, such as glossy surfaces. More advanced techniques for rendering diffuse materials are discussed in Section 3.2.
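The idea of a diffuse convolution map can be sketched as follows: for each candidate surface normal, the environment radiance is integrated against the clamped cosine once, so that at render time a diffuse surface needs only a single lookup by normal instead of hemisphere sampling. The brute-force version below works over an arbitrary direction set and is purely illustrative, not the thesis' implementation.

```python
import math

def diffuse_convolution(env_directions, env_radiance, normals):
    """Tabulate diffuse irradiance: for each candidate surface normal n,
    integrate environment radiance against the clamped cosine (w . n).
    Assumes env_directions cover the sphere roughly uniformly, so each
    sample carries a solid angle of 4*pi/N. The result is divided by pi,
    so a constant environment of radiance 1 maps to 1."""
    d_omega = 4.0 * math.pi / len(env_directions)
    table = []
    for n in normals:
        total = 0.0
        for w, radiance in zip(env_directions, env_radiance):
            cos_t = w[0] * n[0] + w[1] * n[1] + w[2] * n[2]
            if cos_t > 0.0:   # only the hemisphere around n contributes
                total += radiance * cos_t * d_omega
        table.append(total / math.pi)
    return table

def fibonacci_sphere(count):
    """Roughly uniform unit directions on the sphere (golden-angle spiral)."""
    ga = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(count):
        z = 1.0 - 2.0 * (i + 0.5) / count
        r = math.sqrt(1.0 - z * z)
        pts.append((r * math.cos(ga * i), r * math.sin(ga * i), z))
    return pts

# Usage: under a constant environment of radiance 1, the tabulated value
# for any normal approaches 1.0 as the direction set grows.
dirs = fibonacci_sphere(5000)
d = diffuse_convolution(dirs, [1.0] * len(dirs), [(0.0, 0.0, 1.0)])
```

A real preprocessor would evaluate this for a grid of normals (e.g. one per texel of a convolution map) and lift the radiance from the light probe image via the Appendix A remapping.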

For a local scene that contains multiple virtual objects, a further global illumination step must be used to calculate object-to-object interactions, which traditional IBL cannot directly solve. Whilst the speed of rendering is now hindered by a global illumination step, the larger benefits of the light probe method are still not lost. Extensions for improving global illumination performance are


discussed later in this chapter.

2.2.4 Limitations

Two important limitations of traditional IBL are discussed below. While they are not the only ones that exist, they represent the most important factors that the modern research this thesis is based on has chosen to solve.

Static scene The HDR reconstruction from LDR images places a lot of importance on the scene being static, so that no artefacts like ghosting will be present in the resulting radiance map. This problem is not limited to light probe capture, but rather all forms of HDR reconstruction using LDR image sets.

Spatial variance For most scenes there is going to be some spatial variance which one light probe alone cannot represent. If the object is expected to move within the scene, or is static but on the border of spatially varying phenomena such as shadows, then multiple light probes must be taken to allow the virtual object to render correctly within the scene.

2.3 Modern Image Based Lighting techniques

In an environment such as a film set there is often very limited time given to the visual effects crew wishing to capture the lighting information necessary for post-production. Currently there is no commercially available device to conveniently capture many light field samples within a scene. Firstly, the camera's position and orientation must be measured or made able to be tracked, normally by placing markers within the scene and using image-based tracking. The measuring and setup must be accurate and thus require a reasonable amount of time; not well suited to a time-pressured environment. Once the scene has been prepared, the capture can take place. The acquisition is often done using commodity LDR digital cameras, so the many photos required to make one HDR radiance map take some time to capture. This reasserts the requirement that the scene remain static while the varying exposures are acquired.

In the scenario above it is easy to see that, given the current methods, not many light field samples can be acquired within the potentially short time-frame allocated. Solutions for improving capture time and complexity are necessary in order for better image-based lighting methods to fit inconspicuously into the film set environment.

2.3.1 Improving Capture Time

The static scene limitation is fast becoming a thing of the past as more advanced camera technology allows the direct capture of HDR images, even at video frame rates. Beyond solving the static scene limitation, a fast capture device becomes necessary for the acquisition of large data sets representing various illumination properties, such as high-frequency spatial variations.


Figure 2.5. The Real Time Light Probe used in [2, 3, 4]

Unger et al. [2, 3, 4] present the design and use of a Real Time Light Probe, a custom three-camera rig that quickly captures HDR light probe sequences. Each camera captures monochrome HDR video at 512x512 resolution, 25 fps, and at an effective dynamic range of 10,000,000:1. As shown in Figure 2.5, red, green, and blue colour filters are attached to the lenses, and each unit is aimed toward the centre of the light probe residing at the opposite end of the rig. After capture, the three colour channels must be aligned since they each have a different view of the mirror sphere.

2.3.2 Spatially Varying Data

Capturing spatially varying data is another point of interest to those wishing to render synthetic objects within a real scene. Most real-world scenes contain light that varies in the spatial domain, sometimes quite rapidly, and it is important to have the tools to capture these changes. In 2003, Unger et al. [5] experimented with two ways to capture incident light fields (ILF) on a plane: firstly using a mirror-sphere/light-probe array (2.6(a)), and secondly using a high-fidelity capturing device (2.6(b)).

Whilst mirror spheres are often a great way to capture a light field sample, the mirror sphere array suffers from two main problems. The first issue is resolution. Since the whole array is captured in one image, each light field sample only constitutes a small fraction of that image, meaning that the directional resolution is quite limited. The second issue is the interreflection between adjacent spheres on the plane. This means that the parts of the captured data within each light field sample that are not directly reflecting the environment need to be discounted. Fortunately for their purposes, the unoccluded field of view of 159.2 degrees was sufficient given the requirement of capturing a hemisphere of light at each sample point.

The high-fidelity capture device in [5] improves on both issues above at the expense of a greater capture time. The spatial resolution of the ILF is limited only by the motor's step size and rig dimensions, and the directional resolution is much higher since each light field sample is a separate high-resolution image through a 185-degree fish-eye lens.

(a) Mirror Sphere Array

(b) High Fidelity Capture Device

Figure 2.6. Two capture devices presented by Unger et al. [5].

The two devices mentioned above only allow for spatially varying ILF capture within a predefined plane. Using the HDR video camera mentioned previously (shown in Figure 2.5), Unger et al. improved on this by allowing the capture of light field samples along a path [2, 3] and in a volume [4]. In [3], the tracking of light probe samples was done using measured markers on the camera rig, along with two external video cameras and commercial video tracking software. They also mention that image-based tracking methods could be used in place of the external video-based tracking. Since the rig in [4] is computer controlled, the tracking can easily be captured along with each light probe acquisition.

Figure 2.7. 1D spatial variance rendered through IBL techniques discussed further in this thesis, and originally presented by Unger et al. [3]

Figure 2.7 shows the effect of rendering synthetic objects using densely sampled spatially varying data along a straight-line path [3]. Figure 2.8 shows the result of rendering four synthetic objects using ILF data captured in a volume [4]. This image in particular shows some of the latest work in spatially varying data capture and rendering.

Figure 2.8. 3D spatial variance rendered through IBL techniques discussed by Unger et al. [4]


2.3.3 Improving Rendering Efficiency

As mentioned previously, calculating global illumination can be an extremely time-consuming process. Extracting the position of a light source is a vital mechanism for improving the efficiency and accuracy of the global illumination step, as it acts as a kind of importance sampling. Methods such as the median cut algorithm [13] are used to determine the positions of major light sources within a single HDR radiance map and substitute in appropriately valued point-light sources, for which rendering is quick and noise-free. For large data sets where it is impractical to look up individual radiance maps, new techniques had to be developed. Unger et al. [4] are able to generate an approximation of the global scene bounding box and extract the various light sources that may be contained within it. This reduces the overall amount of data that the ILFs represent; it not only improves rendering times, it also allows for an extremely flexible post-production environment in which light sources can be altered or removed altogether.


Chapter 3

Light Probe Sequences

The pipeline for a production describes the sequence of events necessary for its completion. For visual effects, the pipeline comprises several stages, which include the modeling of objects, the lighting and animation of objects within a sequence, the rendering, and the compositing. Working with light probe sequences requires the creation of specialised tools that can fit into this pipeline. The tools presented in this thesis are structured into two main components: a real-time viewport preview, and an off-line shader for high-fidelity rendering. These are designed to fit into the production pipeline in the lighting/animation and rendering stages respectively.

This chapter will examine the techniques required by these tools to work with 1D light probe sequences. The following methods will be evaluated with the same dataset used by Unger et al. [3]. For simplicity, the light probe samples were considered uniformly spaced. This scene contained three vertical bars of light projected at an oblique angle to the sample path.

The high-fidelity render methods will be considered first as they provide the fundamental approach to working with this type of data. Following on from this, methods will be explored that process this data and allow for real-time interaction.

3.1 Off-line Rendering

3.1.1 Traditional IBL

As mentioned in the previous chapter, traditional IBL uses one light probe to illuminate the entire virtual scene. Unger et al. [2, 3] show how this technique fails to adequately mimic the real-world lighting when naïvely applied to light probe sequences, especially those exhibiting high-frequency spatial variations.

Figure 3.1 shows a scene with two neighbouring light probe samples applied using traditional IBL techniques. The difference in lighting information between the two samples can easily be observed, and would result in an unpleasant flickering between frames of an animation adopting this approach.


Figure 3.1. Traditional IBL applied to a scene using two neighbouring light probe samples. Image taken from [3].

During the course of experimentation for this thesis, an extension to the traditional IBL method was explored with the goal of removing the flickering artefacts exhibited during animation. For each rendered frame, a variable number of samples on either side of the main sample were also included in the IBL lookup. Each sample in this set was Gaussian weighted to allow for smoother transitions between areas of high-frequency lighting in the sequence. Though the flickering can be alleviated by this method, the technique fundamentally lacks the ability to represent illumination that varies over objects of all sizes, which is the object of this thesis. Further insight into this problem can be gained by examining what traditional IBL is doing in terms of the plenoptic function [14]: a conceptual tool for describing the flow of radiance within a scene.
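The neighbour-weighting extension described above can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual implementation: the names `gaussian_weights` and `blended_lookup` are hypothetical, and each probe is modelled simply as a callable mapping a direction to an RGB lookup.

```python
import math

def gaussian_weights(half_width, sigma):
    """Normalised Gaussian weights for offsets -half_width .. +half_width."""
    w = [math.exp(-(o * o) / (2.0 * sigma * sigma))
         for o in range(-half_width, half_width + 1)]
    s = sum(w)
    return [x / s for x in w]

def blended_lookup(probes, centre, direction, half_width=2, sigma=1.0):
    """Gaussian-weighted average of neighbouring probe lookups.

    probes[i] is a callable: direction -> (r, g, b) radiance lookup."""
    weights = gaussian_weights(half_width, sigma)
    rgb = [0.0, 0.0, 0.0]
    for offset, w in zip(range(-half_width, half_width + 1), weights):
        i = min(max(centre + offset, 0), len(probes) - 1)  # clamp at sequence ends
        sample = probes[i](direction)
        for c in range(3):
            rgb[c] += w * sample[c]
    return tuple(rgb)
```

Widening `half_width` and `sigma` trades sharpness in the spatial domain for smoother frame-to-frame transitions, which is exactly the trade-off noted above.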

3.1.2 The Plenoptic Function

In its general form, the plenoptic function describes, at any time t, any frequency of light, λ, travelling through any point in 3D space, x, from any angular direction, ~ω, within the scene¹. In this form it is a 7D function denoted by:

P(x, ~ω, t, λ)   (3.1)

This function is often reduced to 5 dimensions by assuming t is constant and considering only the red, green, and blue wavelengths of light. The equation form thus becomes:

P(x, ~ω)   (3.2)

Understanding the concept of the plenoptic function allows us to analyse the capability of any IBL technique. Traditional IBL, when written in terms of the plenoptic function, reduces to 2 dimensions based solely on the angular direction of a ray at any point in space. When viewed in this way it becomes apparent that this method would fail to represent any spatial variation that a scene may exhibit. Based on this analysis, the techniques required to adequately represent these lighting effects must utilise a higher degree of the plenoptic function.

¹This traditional form excludes the polarization of light.

The densely sampled light probe sequences, such as those obtained by the Real Time Light Probe (Figure 2.5), can be described by Equation 3.2 if we assume a static scene. This satisfies the requirements for representing spatial variation, allowing more complex sampling algorithms to be used when rendering with these datasets.

3.1.3 Nearest Neighbour Sampling

The most straightforward method for using a 1D light probe sequence is to find the sample along the sample path that is closest to the point being rendered and perform the IBL lookup there. This method is referred to as nearest neighbour sampling. Figure 3.2 demonstrates this process in 2D by looking orthogonal to the sample path.


Figure 3.2. Nearest neighbour sampling with a 1D light probe sequence. The point p is projected to the nearest point on the sample path. The nearest sample(s) are then used as normal radiance maps, indexed with the direction d.

Since this method relies on the position of a point in space as well as the direction of a ray originating from that point, it appears to satisfy the plenoptic dimensionality of the dataset. Figure 3.3 shows a scene rendered using nearest neighbour sampling of a 1D light probe sequence. Though the objects in the scene now display spatial variation over their surfaces, on closer inspection the method actually fails to fully replicate the directional properties of the illumination. Given a linear sample path, the scene can be transformed such that the sample path lies along the z-axis. The nearest neighbour projection simply ignores the x and y coordinates of any point in order to find the nearest sample along the sample path. This means that all spatial variations that aren't orthogonal to the sample path are treated as though they were exactly parallel to the sample path. In Figure 3.3, the light bands within the scene are actually projected at an oblique angle to the sample path, as stated at the beginning of this chapter.
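Assuming uniformly spaced probes along the z-axis (as with the test dataset), the projection described above reduces to a one-line index computation. The function name and tuple-based point representation are illustrative only:

```python
def nearest_probe_index(p, z0, spacing, count):
    """Nearest-neighbour lookup for a 1D probe sequence along the z-axis.

    Ignores p.x and p.y entirely: the point is projected onto the path
    and snapped to the closest of `count` probes, spaced `spacing` apart
    starting at z0."""
    i = round((p[2] - z0) / spacing)
    return min(max(i, 0), count - 1)  # clamp to the ends of the sequence
```

The fact that the x and y components never appear in the computation is precisely why this scheme cannot reproduce lighting that varies obliquely to the path.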


Figure 3.3. A rendering using nearest neighbour sampling with a 1D light probe sequence. The vertical light bands are projected at an oblique angle to the sample path, though this method makes it appear as though they are orthogonal. This image also shows the noise that is generated by undersampling the environment maps, even though over 16000 samples were chosen per environment map and the render took some hours.

Since all 1D paths can be rotated to the z-axis, and all objects in space rotated with them, the sampling method described above can be reduced to the 3D plenoptic function P(z, ~ω). This proves that the nearest neighbour sampling method doesn't, in fact, satisfy the plenoptic dimensionality of the dataset.

3.1.4 Ray Projection Sampling

The downfall of the nearest neighbour method is its inability to utilise the full 5 dimensions of information that the dataset has to offer. Taking another look at Figure 3.2, it becomes obvious that the direction d, when translated down to the sample path, would almost certainly point toward lighting information that is not appropriate to the same direction at point p.

To fix this, ray projection sampling, as discussed by [3] and shown in Figure 3.4, finds a far more appropriate sample to use. The ray at each point, r(t) = p + td, should be extended toward the linear sample path to find the shortest distance between these two lines. This indicates a better sample region to use for the IBL lookup in the direction d. The equations that do this for samples along the z-axis are presented in [3] and are listed below in an adapted form:

r(t) = p + t·d

Δ(t) = √(r_x(t)² + r_y(t)²)

p_proj = p + (argmin_t Δ(t))·d,    z_proj = (p_proj)_z   (3.3)
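The minimisation in Equation 3.3 has a closed form: setting the derivative of Δ(t)² to zero gives t = −(p_x·d_x + p_y·d_y)/(d_x² + d_y²). A minimal sketch (illustrative code, not from the thesis):

```python
def ray_projection_z(p, d):
    """Project the ray r(t) = p + t*d onto the z-axis sample path (Eq. 3.3).

    Returns z_proj, the z coordinate of the ray point closest to the path.
    d need not be normalised; when the ray is parallel to the path
    (d_x = d_y = 0) the distance is constant in t, so p's own z is used."""
    denom = d[0] * d[0] + d[1] * d[1]
    if denom == 0.0:
        return p[2]
    t = -(p[0] * d[0] + p[1] * d[1]) / denom   # argmin of Delta(t)^2
    return p[2] + t * d[2]
```

For example, a point at (1, 0, 0) with direction (−1, 0, 1) reaches the path at t = 1, giving z_proj = 1, whereas nearest neighbour sampling would have used z = 0.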

If the point of shortest distance falls between two light probe samples then an appropriate interpolation should be used, the choice of which mainly depends on how densely the data was sampled and the size of the object being rendered. For densely sampled light probe data being rendered onto large objects, the interpolation scheme becomes less important since the distance between samples only occupies a relatively small region (possibly sub-pixel). The converse scenario would require more sophisticated interpolation to try and maintain sharpness between areas of high-frequency variations in light. The tools created for this thesis always use a linear interpolation scheme for blending between colour values of neighbouring samples.


Figure 3.4. Ray projection sampling with a 1D light probe sequence. The direction d is projected to the sample path from point p, indicating the better samples to choose.

Figure 3.5 shows the ray projection method applied to a simple scene. In this example, the oblique lighting direction is now represented on all rendered surfaces. As the points being rendered diverge from the sample path, the lighting continues to illuminate them from the same oblique angle.

Figure 3.5. Ray projection sampling with a 1D light probe sequence. This method displays better results for points diverging from the sample path. This image also shows the noise that is generated by undersampling the environment maps, even though over 16000 samples were chosen per environment map and the render took some hours.


3.1.5 Single-Viewpoint Reprojection

Following on from ray projection sampling, a further adjustment must be made to be sure that the correct sample is used. This adjustment is based on the physical size of the mirror sphere and, given a straight-line sampling path along the view direction, the θ component of the projected ray. This is discussed by [3], in which the following equations were presented:

θ/2 = arcsin(r/R)

z = z0 − R·cos(θ/2) − r·tan(θ − π/2)   (3.4)

Where r, R, θ, and z0 are described in Figure 3.6.

Figure 3.6. The single-viewpoint reprojection method. z0 (green) is the position of the mirror sphere at each frame, while the red points, z, are the projection points of each incident ray back to the sample path. R is the radius of the mirror sphere and r is the radial distance from the sample path to the ray's intersection point. Image taken from [3].

The offset z must be used in conjunction with the z_proj value, found using the ray projection method, to obtain the correct region for choosing which light probe samples to use. This technique is especially important for correct rendering using densely sampled light probe sequences, where the spatial sampling is much smaller relative to the size of the mirror sphere.
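Although the tools in this thesis omit this correction, Equation 3.4 can be sketched as a small helper. The function name is hypothetical, and the sketch assumes the reading of the equation given above; note the r = 0 case degenerates to the front of the sphere, while for small non-zero r the projection point approaches the mirror's focal point at z0 − R/2.

```python
import math

def reprojected_z(z0, R, r):
    """Axial reprojection point of a reflected ray (a sketch of Eq. 3.4).

    z0: sphere centre position on the path, R: sphere radius,
    r:  radial distance of the ray's intersection point from the path."""
    theta = 2.0 * math.asin(r / R)
    return z0 - R * math.cos(theta / 2.0) - r * math.tan(theta - math.pi / 2.0)
```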

This technique was not used for this thesis in order to simplify the calculations during real-time rendering. For consistency, it was also not applied to the off-line rendering shaders.

3.2 Real-time Diffuse Rendering

Due to the potentially long rendering times for each frame, it may become impractical to rely on a trial-and-error approach when working with ILF data. This is further accentuated in production environments where a queue system controls animators' access to the render farm. A real-time interface into a spatially varying dataset provides greater efficiency for animators and lighting directors, as it removes the need for costly trial-and-error rendering by providing an interactive approximation of the outcome.

Figure 3.7. A scene lit by a grill-covered spot light. The diffuse materials in the bottom row reveal more about the spatially varying light than the specular materials in the top row.

The real-time methods presented below focus on the efficient representation of diffuse materials. The reason for this is that diffuse materials visualise the spatial variance within the scene better than specular materials do, as seen in Figure 3.7.

3.2.1 Spherical Harmonics

An extremely small and efficient representation of Lambertian diffuse materials can be obtained by using spherical harmonics. Describing spherical harmonics in depth is out of the scope of this thesis, so only a few details will be presented below. More detailed and well-explained information on this subject can be found in [6] and [7].

Overview

A spherical function is one that is defined on the surface of a sphere. The aim of a spherical harmonic projection is to decompose a spherical function into a sum of weighted basis functions; the same principle as Fourier transforms applied to 1D and 2D functions. This allows a potentially complex spherical function to be represented, in a simple way, by a list of basis function coefficients. The inverse process, summing all the basis functions weighted by their coefficients, allows the original function to be reconstructed.

The following sections will briefly describe the process of projection and reconstruction using spherical harmonics. The descriptions below will only involve real-valued functions, though complex-valued functions can also be handled by this method. This is because diffuse irradiance estimation and spherical harmonic lighting only require real-valued functions for their calculation.

Basis Functions

As stated in [7], the functions, p_n(x), that satisfy an orthogonality relation over a specific domain [a, b] are known as basis functions. The orthogonality relation can be expressed as:

∫_a^b w(x)·p_n(x)·p_m(x) dx = c_n·δ_nm = c_m·δ_nm   (3.5)

Where:

δ_nm = 1 if n = m, and 0 if n ≠ m

and w(x) is an arbitrary weighting function independent of n and m.

This property allows any real-valued function to be represented as a sum of weighted basis functions, given an infinite number of basis functions. For a limited set of basis functions, the resulting reconstruction is a band-limited approximation. This property is ideal when decomposing low-frequency spherical functions, such as diffuse lighting, as a finite series should produce adequate results with a concise representation.

The associated Legendre polynomials are a set of real-valued functions which can satisfy the orthogonality relation for the interval [−1, 1]. They are defined as:

P_l^m(x) = ((−1)^m / (2^l·l!)) · √((1 − x²)^m) · d^(l+m)/dx^(l+m) (x² − 1)^l   (3.6)

where l ∈ N0 and 0 ≤ m ≤ l. The value of l determines the band of functions. The requirement for making these sets of functions satisfy the orthogonality relation is to keep either the m or the l value constant throughout the calculation of Equation 3.5. That is, if p_n(x) is the associated Legendre polynomial P_l^m(x), then p_m(x) must be either P_l'^m(x) or P_l^m'(x) in order for the orthogonality relation to hold. Because of this requirement, the associated Legendre polynomials alone will not suffice as a complete spherical harmonic basis.

Another set of orthogonal functions are sine and cosine, used as part of Fourier analysis. These functions satisfy the orthogonality relation on the interval [−π, π] and can be used in conjunction with the associated Legendre polynomials to build a spherical harmonic basis. For the following basis function definition, l ∈ N0 and −l ≤ m ≤ l:


y_l^m(θ, φ) =
    √2·N_l^m·cos(mφ)·P_l^m(cos θ)          m > 0
    N_l^0·P_l^0(cos θ)                      m = 0
    √2·N_l^|m|·sin(|m|φ)·P_l^|m|(cos θ)     m < 0   (3.7)

Where:

N_l^m = √( ((2l + 1)/(4π)) · ((l − m)!/(l + m)!) )   (3.8)

is the normalisation component derived from the complex-valued spherical harmonic series, as shown by Schönefeld [7].

Figure 3.8 shows a graphical representation of the first four bands of the spherical harmonic basis functions described in Equation 3.7.

Figure 3.8. The first four bands of the spherical harmonic basis functions. The colour green indicates positive values, while red indicates negative. This image was taken from [6].

Projection and Reconstruction

Now that the spherical harmonic basis functions have been defined, a real-valued spherical function can be decomposed. This is called spherical harmonic projection.

The process of projection is to calculate a scalar value k_n that represents how much the original function, f, resembles each of the basis functions, p_n. This procedure simply involves taking the integral of the product of f and p_n over the full domain of f:

∫ f(x)·p_n(x) dx = k_n   (3.9)


Figure 3.9. An example of function projection (decomposition into basis function coefficients) and the subsequent reconstruction. This image was taken from [7].

This process can be seen applied to the first few Legendre polynomials in Figure 3.9(a). The process of reconstructing the function f is achieved by summing each of the basis functions weighted by its associated coefficient. For perfect reconstruction this is an infinite sum; however, we will only be dealing with finite representations, so the resulting function, f̃, is a band-limited approximation of the original:

f̃(x) = Σ_{n=0}^{N} k_n·p_n(x)   (3.10)

This procedure can be seen in Figure 3.9(b), which uses the coefficients calculated from the projection in the neighbouring image.
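The projection/reconstruction cycle of Equations 3.9 and 3.10 can be illustrated in 1D with orthonormalised Legendre polynomials. This is a hypothetical demo (assuming numpy), not code from the thesis; the scaling factor √((2n+1)/2) makes each basis function unit-norm on [−1, 1] so that Equation 3.9 needs no extra normalisation.

```python
import numpy as np
from numpy.polynomial import legendre

def orthonormal_legendre(n, x):
    """n-th Legendre polynomial, scaled so its squared integral over [-1, 1] is 1."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return np.sqrt((2 * n + 1) / 2.0) * legendre.legval(x, coeffs)

def integrate(y, x):
    """Simple trapezoidal rule over the sampled interval."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def project(f, num_basis, x):
    """Eq. 3.9: each coefficient k_n is the integral of f(x) * p_n(x)."""
    return [integrate(f(x) * orthonormal_legendre(n, x), x)
            for n in range(num_basis)]

def reconstruct(ks, x):
    """Eq. 3.10: band-limited sum of the basis functions weighted by k_n."""
    return sum(k * orthonormal_legendre(n, x) for n, k in enumerate(ks))

x = np.linspace(-1.0, 1.0, 2001)
f = lambda x: np.cos(np.pi * x / 2.0)   # a smooth, low-frequency test function
ks = project(f, 8, x)
err = float(np.max(np.abs(reconstruct(ks, x) - f(x))))
```

For a smooth function like this, eight coefficients already reconstruct it to well under 0.1% error, which is the behaviour the diffuse-lighting argument below relies on.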

Diffuse Projection and Reconstruction of Environment Maps

To find the basis function coefficients of an environment map we must be able to integrate over the sphere of radiance values that the environment map represents. For this thesis, two methods were tested:

Monte Carlo integration - N uniformly distributed random samples on the unit sphere are used to index the environment map. Since they are uniformly distributed over the sphere, the normalisation of the integration becomes 4π/N.

Riemann Sum - All the valid pixels in the environment map are iterated through, ensuring all radiance information is present in the calculation. In this thesis, mirror-sphere environment maps were used, so N ≈ X²π/4, where X is the number of pixels along one edge of the (assumed square) image. Like the Monte Carlo method, the normalisation of the integration is 4π/N due to the even spread of pixels within a mirror-sphere environment map [1]. This method of integration is used in preference to Monte Carlo integration since it is more accurate and is quick enough to process on commodity hardware. Further discussion on implementation and performance is presented in Section 4.2.2.

Other than the choice of sample points, the algorithm for projection remains the same for each method. Each chosen sample point in the environment map, (u, v), corresponds to a direction which can be written in terms of θ and φ, as shown in Appendix A. Given a θ and φ value and the radiance from that direction, L(θ, φ), each basis function coefficient, k_l^m, is calculated by the following equation:

k_l^m = (4π/N) Σ_{i=1}^{N} y_l^m(θ_i, φ_i)·L(θ_i, φ_i)   (3.11)
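The 4π/N normalisation in Equation 3.11 can be sanity-checked by Monte Carlo integration of a known spherical function, here max(cos θ, 0), whose integral over the sphere is π. The sampling helper below is illustrative and independent of the SH basis functions themselves; all names are assumptions.

```python
import math
import random

def uniform_sphere_samples(n, rng):
    """n uniformly distributed directions on the unit sphere."""
    dirs = []
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)             # uniform cos(theta) => uniform area
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(max(0.0, 1.0 - z * z))
        dirs.append((s * math.cos(phi), s * math.sin(phi), z))
    return dirs

def mc_sphere_integral(f, n=200000, seed=1):
    """Monte Carlo estimate of the integral of f over the sphere,
    using the 4*pi/N normalisation from Equation 3.11."""
    rng = random.Random(seed)
    total = sum(f(d) for d in uniform_sphere_samples(n, rng))
    return 4.0 * math.pi * total / n
```

For example, `mc_sphere_integral(lambda d: max(d[2], 0.0))` should approach π; replacing the test function with y_l^m(d)·L(d) yields the coefficient estimator of Equation 3.11.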

Ramamoorthi and Hanrahan [15] show that a Lambertian diffuse representation of an environment map can be sufficiently approximated by the first nine spherical harmonic basis functions: those in the first three bands (l ≤ 2). This is due to the low-frequency nature of diffuse lighting variation, which suits the low-frequency nature of the functions occupying these bands. The result of this projection is the nine associated floating-point coefficients, which can be used to generate the diffuse lighting in real-time on commodity hardware. Since we are dealing with real-valued spherical functions, each colour channel of the environment map must be considered individually. This means that, for an RGB image, three sets of nine coefficients are generated.

All the necessary calculations for doing this are presented by Ramamoorthi and Hanrahan [15, 16], with more verbose descriptions by Schönefeld [7] and Green [6]. In short, for each point on the surface being rendered, the irradiance is determined by all light incident from the hemisphere centred on its normal, ~n = (x, y, z). This hemisphere is the set of all directions for which (~n · ~ω) ≥ 0. Expressed in terms of a convolution, the irradiance becomes:

E(~n) = ∫_{Ω(~n)} L(~ω)·(~n · ~ω) dω = L ⋆ max(~n · ~ω, 0)

Ramamoorthi and Hanrahan [15] present two forms for calculating this. The matrix form, with the normal expressed in homogeneous form, n^t = (x, y, z, 1):

E(n) = n^t·M·n   (3.12)

Where:

M = | c1·k_2^2     c1·k_2^-2    c1·k_2^1     c2·k_1^1            |
    | c1·k_2^-2   −c1·k_2^2     c1·k_2^-1    c2·k_1^-1           |
    | c1·k_2^1     c1·k_2^-1    c3·k_2^0     c2·k_1^0            |
    | c2·k_1^1     c2·k_1^-1    c2·k_1^0     c4·k_0^0 − c5·k_2^0 |   (3.13)


c1 = 0.429043,  c2 = 0.511664,  c3 = 0.743125,  c4 = 0.886227,  c5 = 0.247708   (3.14)

And the polynomial form:

E(n) = c1·k_2^2·(x² − y²) + c3·k_2^0·z² + c4·k_0^0 − c5·k_2^0
     + 2·c1·(k_2^-2·xy + k_2^1·xz + k_2^-1·yz)
     + 2·c2·(k_1^1·x + k_1^-1·y + k_1^0·z)   (3.15)

Rendering with Light Probe Sequences

Each environment map in the sequence can easily be processed, per colour channel, into this succinct diffuse representation. Even with the 500-light-probe sequence used for testing these tools, the processed data remains extremely small, allowing it to reside in texture memory with a very small footprint. For this reason, all the coefficient data was stored in the matrix form described by Equation 3.13 to make texture lookup and GPU calculations more straightforward. This is described in more detail in Chapter 4 and Appendix B.1.

Figure 3.10. Real-time rendering of a 1D light probe sequence using the spherical harmonic method. The same parallax errors occur as with the nearest neighbour method presented in Section 3.1.3.

Though this method for rendering diffuse lighting is very fast and efficient, even for large light probe sequences, it suffers from exactly the same problem as the nearest neighbour method presented in Section 3.1.3. Since the irradiance in Equations 3.13 and 3.15 is a function of the normal only, it exhibits the same parallax errors as the rendered points diverge from the sample path, as seen in Figure 3.10. The Future Work section on page 41 discusses an alternate processing technique which allows spherical harmonics to be used within an extrapolated volume of samples. This technique was not explored in this thesis due to the time constraints imposed.

3.2.2 Down-Sampling

Due to the low-frequency nature of diffuse lighting, a down-sampling of the environment map can provide a valid diffuse representation. By selecting a unique sample set of unit-length directions, an environment map can be split into Voronoi regions where each pixel (and its associated direction) is uniquely associated with its closest sample. The pixels within each region of the environment map are averaged and the resulting value is associated with the primary direction for that region. This allows a ray-projection algorithm to be employed where, for each point being illuminated, the set of directions can determine the correct set of down-sampled information to use.
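The region-averaging step can be sketched as follows: for unit-length directions, the closest primary direction is the one with the largest dot product, which is equivalent to a Voronoi assignment on the sphere. All names here are illustrative; a real implementation would iterate the mirror-sphere image's valid pixels.

```python
def downsample(directions, pixels):
    """Average radiance per Voronoi region on the sphere.

    directions: list of unit-length primary sample directions.
    pixels: list of (unit_direction, rgb) pairs from the environment map."""
    sums = [[0.0, 0.0, 0.0] for _ in directions]
    counts = [0] * len(directions)
    for d, rgb in pixels:
        # largest dot product == smallest angle == Voronoi region on the sphere
        best = max(range(len(directions)),
                   key=lambda i: sum(a * b for a, b in zip(directions[i], d)))
        counts[best] += 1
        for c in range(3):
            sums[best][c] += rgb[c]
    return [tuple(s / max(cnt, 1) for s in region)   # empty regions stay black
            for region, cnt in zip(sums, counts)]
```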

Generating the set of unit-length directions can be done in many ways. Ideally, the directions should correspond to the main directions of light sources within the environment map. However, the directions chosen must be kept the same when processing all the environment maps in the sequence, since the rendering algorithm must remain independent of any one map. Analysing the algorithm in Section 4.2 and the GLSL code in B.2 makes this clearer. Because of the non-trivial nature of finding optimal sample directions for an arbitrary sequence, two generic methods for generating them were tested:

Uniform Polar Samples - Generate exactly N samples in the 2D spherical coordinate plane. These samples are found using the following algorithm:

begin
    pairList = find integer pairs (a, b) s.t. N - (a * b) <= 2
    (a, b) = search pairList for the pair with smallest |a - b|
    remainder = N - (a * b)
    if (remainder > 0)
        add sample at top pole
    generate max(a, b) azimuth samples
    generate min(a, b) polar samples between poles
    if (remainder == 2)
        add sample at bottom pole
end
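A hypothetical C++ sketch of this sample generation (function and type names such as `bestFactorPair` are invented for illustration; the pair search and ring layout follow the pseudocode above):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };

// Find the integer pair (a, b) with 0 <= N - a*b <= 2 and the smallest
// |a - b|; the remainder (0..2) is assigned to the poles.
std::pair<int, int> bestFactorPair(int n) {
    std::pair<int, int> best(1, n);
    for (int a = 1; a <= n; ++a)
        for (int b = a; a * b <= n; ++b)
            if (n - a * b <= 2 &&
                std::abs(a - b) < std::abs(best.first - best.second))
                best = std::make_pair(a, b);
    return best;
}

// Generate exactly n samples: rings of azimuth samples between the poles,
// plus pole samples for the remainder.
std::vector<Vec3> uniformPolarSamples(int n) {
    const double pi = std::acos(-1.0);
    std::pair<int, int> ab = bestFactorPair(n);
    int remainder = n - ab.first * ab.second;
    int nAzimuth = std::max(ab.first, ab.second);
    int nPolar = std::min(ab.first, ab.second);
    std::vector<Vec3> dirs;
    if (remainder > 0) dirs.push_back(Vec3{0.0, 0.0, 1.0});    // top pole
    for (int j = 1; j <= nPolar; ++j) {                        // rings between the poles
        double theta = pi * j / (nPolar + 1);
        for (int i = 0; i < nAzimuth; ++i) {
            double phi = 2.0 * pi * i / nAzimuth;
            dirs.push_back(Vec3{std::sin(theta) * std::cos(phi),
                                std::sin(theta) * std::sin(phi),
                                std::cos(theta)});
        }
    }
    if (remainder == 2) dirs.push_back(Vec3{0.0, 0.0, -1.0});  // bottom pole
    return dirs;
}
```

For N = 12 this yields the 3×4 ring layout (no poles); for N = 13 a top pole is added.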

Figures 3.11(a) and 3.11(b) show examples of the regions formed by uniformly sampling the spherical coordinate plane. Twelve and 42 sample points were chosen for comparison with the following method.

Icosahedron and its subdivisions - The 12 vertices of an icosahedron are uniformly distributed over the surface of a sphere, producing uniform Voronoi regions. Successive subdivisions of the icosahedron produce near-uniform vertex distributions which remain adequate for this purpose. The regions produced by an icosahedron and its first subdivision can be seen in Figures 3.11(c) and 3.11(d).
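The vertex set and its first subdivision can be sketched as follows (hypothetical C++, not the thesis code; it uses the standard golden-ratio vertex construction and the fact that every raw icosahedron edge has the same squared length of 4, so edges can be found by distance):

```cpp
#include <cassert>
#include <cmath>
#include <initializer_list>
#include <vector>

struct V3 { double x, y, z; };

static V3 normalize(V3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return V3{v.x / len, v.y / len, v.z / len};
}

// The 12 icosahedron vertices from the golden ratio, before normalisation:
// (+-1, +-p, 0), (0, +-1, +-p), (+-p, 0, +-1) with p = (1 + sqrt(5)) / 2.
std::vector<V3> icosahedronRaw() {
    const double p = (1.0 + std::sqrt(5.0)) / 2.0;
    std::vector<V3> v;
    for (double s1 : {-1.0, 1.0})
        for (double s2 : {-1.0, 1.0}) {
            v.push_back(V3{s1, s2 * p, 0.0});
            v.push_back(V3{0.0, s1, s2 * p});
            v.push_back(V3{s2 * p, 0.0, s1});
        }
    return v;
}

// One subdivision: the 12 vertices plus one normalised midpoint per edge
// (30 edges), giving 42 near-uniform directions on the unit sphere.
std::vector<V3> icosahedronSubdivided() {
    std::vector<V3> raw = icosahedronRaw();
    std::vector<V3> out;
    for (std::size_t i = 0; i < raw.size(); ++i) out.push_back(normalize(raw[i]));
    for (std::size_t i = 0; i < raw.size(); ++i)
        for (std::size_t j = i + 1; j < raw.size(); ++j) {
            double dx = raw[i].x - raw[j].x, dy = raw[i].y - raw[j].y,
                   dz = raw[i].z - raw[j].z;
            if (std::fabs(dx * dx + dy * dy + dz * dz - 4.0) < 1e-9)  // an edge
                out.push_back(normalize(V3{raw[i].x + raw[j].x,
                                           raw[i].y + raw[j].y,
                                           raw[i].z + raw[j].z}));
        }
    return out;
}
```

Repeating the midpoint step on the triangulated mesh gives the familiar 12, 42, 162, ... vertex counts.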

(a) 12 Polar Sample Regions (b) 42 Polar Sample Regions

(c) 12 Icosahedron Sample Regions (d) 42 Icosahedron-based Sample Regions

Figure 3.11. Rendering of a sphere showing the difference in regions between the spherical sampling schemes. The positive directions for the x (red), y (green), and z (blue) axes are shown. The green dots represent the primary sample direction for each region. Figures 3.11(a) and 3.11(b) show the non-uniform regions produced by uniform sampling in polar coordinate space. Figures 3.11(c) and 3.11(d) show the more uniform regions produced by using the vertices of an icosahedron and its first subdivision respectively.

For creating sample directions in a generic way, it is desirable for all directions to be uniformly spaced over the surface of the sphere, which would lead to uniform regions associated with each direction. In the polar sampling approach, the regions at the top have much smaller area than those around the equator, which would lead to inaccuracies and bias when rendering with this information. For the icosahedron and subdivision approach, all regions are more evenly distributed, giving a better indication of the illumination around each sample direction. The only drawback is that it is not possible to generate an arbitrary number of sample directions. Uniformly sampling the surface of a sphere with an arbitrary number of samples is a non-trivial process. A simulated annealing approach was attempted during experimentation, but the complexity was deemed out of the scope of this thesis and the approach was duly dropped.

Once the samples have been decided upon, each environment map in the sequence can be processed to find the average radiance per region. For N sample directions, the processing of each environment map produces N RGB radiance values, representing the average radiance for the Voronoi region around each sample direction.

(a) 12 Polar Samples (b) 42 Polar Samples

(c) 12 Icosahedron Samples (d) 42 Icosahedron Samples

Figure 3.12. Real-time rendering of a 1D light probe sequence using the down-sampling method. The same four sample sets shown in Figure 3.11 are used.

Just like the spherical harmonic method, this data can be used with modern graphics hardware and custom shaders. However, real-time rendering with this information is far more computationally expensive than the spherical harmonic method. The algorithm for processing this data involves a for-loop over all sample directions, and for an arbitrary number of directions this becomes extremely inefficient on a GPU. Interactive frame rates could still be maintained given a reasonable number of samples (details discussed below). More details regarding the implementation of this shader can be seen in Section 4.3 and Appendix B.2.

Figure 3.12 shows the effects of rendering with the same sampling data shown in Figure 3.11. Though problems with parallax still exist, the representation of the environment lighting is improved. The four images display how vastly different the results can be when the chosen sample directions fit poorly with the environment lighting.

Figure 3.12(a) shows almost no difference compared to the spherical harmonic method. The reason for this can be seen more clearly by looking at the samples it uses, shown in Figure 3.11(a). Each of the samples lies in either the x-y plane or the y-z plane, so all the ray projections will be non-oblique. Figure 3.12(c) improves on this by including samples in the oblique regions, which can be seen on the red cylinder. However, a definite under-sampling of the upper hemisphere causes these samples to misrepresent the illumination of the floor. Both twelve-sample tests obviously lack the angular resolution to handle this scene.

As one would expect, a larger number of samples results in a better representation of the scene lighting, as shown in Figures 3.12(b) and 3.12(d). This comes at a greater cost in processing time, reducing the framerate dramatically compared to the previous tests. The response time still remains adequate for interaction, and since this provides a closer representation of the actual lighting it will be very much desired by the relevant directors and animators.

The limiting factor for the performance of this technique is the number of fragments needing to be rendered, not the number of triangles in the scene. For tests run in an 800x800 viewport on an nVidia Quadro FX 2500M, the 12-sample datasets ran at 20 FPS for viewport-covering surfaces, and over 50 FPS when approximately half the viewport was covered. For the 42-sample datasets, a full viewport of fragments rendered at around 5-8 FPS, while an approximately half-covered viewport ran at 15 FPS.

Generic sampling of the environment maps requires a greater number of samples for adequate reproduction of illumination. An understanding of the prominent light sources within the scene would enable a better sampling of the environment maps through the manual specification of directions.


Chapter 4

Implementation and Usage

The following sections will present specific details regarding the implementation and use of the tools created during this thesis. The tools include:

• A light probe sequence processing utility for generating the data used forreal-time rendering,

• A Maya hardware shader for real-time interaction, and

• mental ray light and environment shaders for using light probe sequences.

4.1 Ray Projection Algorithm

Common to both the real-time and off-line shaders is the ray projection algorithm. This is described in Section 3.1.4 and by Unger et al. [3], and is a special case of finding the closest point between two lines. Equation 3.3 labels this point p_proj, which can be determined by solving arg min ∆(t) and applying the result to the ray equation r(t). Finding the minimum of ∆, where ∆ ∈ R+, is equivalent to finding the minimum of ∆², which makes the minimisation considerably easier. This process can be seen in Appendix C, and allows p_proj to be calculated by the equation:

p_proj = p − ((p_x d_x + p_y d_y) / (d_x² + d_y²)) · d
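A direct transcription of this projection (a hypothetical C++ sketch; the sample path is assumed to lie along the z-axis, matching the capture setup):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Closest point on the ray r(t) = p + t*d to the z-axis (the sample path).
// Minimising the squared horizontal distance (p.x + t*d.x)^2 + (p.y + t*d.y)^2
// gives t = -(p.x*d.x + p.y*d.y) / (d.x^2 + d.y^2), which is exactly
// p_proj = p - ((p.x*d.x + p.y*d.y) / (d.x^2 + d.y^2)) * d.
Vec3 projectToPath(Vec3 p, Vec3 d) {
    double t = -(p.x * d.x + p.y * d.y) / (d.x * d.x + d.y * d.y);
    return Vec3{p.x + t * d.x, p.y + t * d.y, p.z + t * d.z};
}
```

The z-component of the result then indexes into the light probe sequence. Note the division degenerates when d is parallel to the path (d.x = d.y = 0); a real implementation must guard against that case.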

4.2 Processing

Pre-processing the light probe sequence allows us to use its diffuse information in a real-time setting. As mentioned in Section 3.2, two types of real-time rendering methods were tested: one using a spherical harmonic representation, and the other using a direction-based down-sampling of the environment map. The following sections outline the exact algorithms used to process the data into their relevant forms.


4.2.1 Spherical Harmonic Representation

A small spherical harmonics projection library was created to allow for direct computation on HDR environment maps. This code is simply an implementation of Equation 3.7 that calculates the coefficients of the three colour channels simultaneously.

Listing 4.1 shows how the Riemann sum method (described in Section 3.2.1) is used to project an environment map and obtain the spherical harmonic coefficients. This algorithm assumes an image iterator exists which only points to appropriate pixels in the environment map. The iterator also calculates the relevant θ and φ values that each pixel represents:

Colour[] SHProjection(EnvMap, numBands)
    Colour shCoefficients[numBands * numBands]
    numSamples = 0
    for pixel = all pixels in EnvMap
        for l = 0 to numBands
            for m = -l to l
                index = l*(l + 1) + m
                shCoefficients[index] += Y(l, m, pixel.theta, pixel.phi) * pixel.colour
        numSamples++
    // Normalise the coefficients
    for i = 0 to numBands*numBands
        shCoefficients[i] *= 4*PI / numSamples
    return shCoefficients

Listing 4.1. Spherical Harmonic projection algorithm
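To make the listing concrete, here is a hypothetical, self-contained C++ version restricted to bands 0 and 1, integrating over a latitude-longitude grid. Unlike the listing, which weights every mirror-sphere pixel equally and normalises by 4π/numSamples, this sketch weights each cell by its solid angle sin(θ)·dθ·dφ; both are Riemann-sum approximations of Equation 3.7.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// Real spherical harmonic basis for bands l = 0, 1.
double shBasis(int l, int m, double theta, double phi) {
    const double y00 = 0.282095;  // 1 / (2 sqrt(pi))
    const double y1 = 0.488603;   // sqrt(3 / (4 pi))
    if (l == 0) return y00;
    if (m == -1) return y1 * std::sin(theta) * std::sin(phi);  // ~ y
    if (m == 0)  return y1 * std::cos(theta);                  // ~ z
    return y1 * std::sin(theta) * std::cos(phi);               // ~ x
}

// Riemann-sum projection of a scalar radiance function L(theta, phi) onto
// the first numBands bands (numBands^2 coefficients, indexed l*(l+1)+m).
std::vector<double> shProject(const std::function<double(double, double)>& L,
                              int numBands, int res) {
    const double pi = std::acos(-1.0);
    std::vector<double> coeffs(numBands * numBands, 0.0);
    double dTheta = pi / res, dPhi = 2.0 * pi / (2 * res);
    for (int i = 0; i < res; ++i) {
        double theta = (i + 0.5) * dTheta;
        double dOmega = std::sin(theta) * dTheta * dPhi;  // cell solid angle
        for (int j = 0; j < 2 * res; ++j) {
            double phi = (j + 0.5) * dPhi;
            for (int l = 0; l < numBands; ++l)
                for (int m = -l; m <= l; ++m)
                    coeffs[l * (l + 1) + m] +=
                        shBasis(l, m, theta, phi) * L(theta, phi) * dOmega;
        }
    }
    return coeffs;
}
```

As a sanity check, a constant unit-radiance environment projects to the single coefficient c00 = ∫ Y00 dω = 2√π ≈ 3.5449, with the band-1 coefficients vanishing by symmetry.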

Once all the coefficients for an environment map are calculated, they are put into the matrix form shown in Equation 3.13. Each colour channel's matrix is then split into its column vectors and stored in a binary file. A group of three matrices, one for each colour channel, is referred to as a set. There are as many sets as there are images in the light probe sequence. Once the processing is complete, all sets' matrices are stored in binary form and uploaded as a texture to the graphics hardware. Figure 4.1 shows this representation from the perspective of a texture. Each pixel in the texture corresponds to a column vector of a matrix. These column vectors can be read in, combined into a matrix, and applied to a normal vector very quickly on the GPU. The GLSL code for this is shown in Section B.1.
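For reference, the matrix form of Equation 3.13 follows Ramamoorthi and Hanrahan [15]: the nine band-0-to-2 coefficients of a colour channel are packed into a 4×4 matrix M so that the irradiance is E(n) = nᵀ M n with the homogeneous normal n = (x, y, z, 1). A hypothetical C++ sketch for one channel, with the constants taken from [15]:

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { double m[4][4]; };

// Build the 4x4 irradiance matrix from the 9 SH coefficients of one colour
// channel, indexed L[l*(l+1)+m]: L[0]=L00, L[1]=L1-1, L[2]=L10, L[3]=L11,
// L[4]=L2-2, L[5]=L2-1, L[6]=L20, L[7]=L21, L[8]=L22.
Mat4 irradianceMatrix(const double L[9]) {
    const double c1 = 0.429043, c2 = 0.511664, c3 = 0.743125,
                 c4 = 0.886227, c5 = 0.247708;
    Mat4 M = {{
        { c1 * L[8],  c1 * L[4],  c1 * L[7],  c2 * L[3] },
        { c1 * L[4], -c1 * L[8],  c1 * L[5],  c2 * L[1] },
        { c1 * L[7],  c1 * L[5],  c3 * L[6],  c2 * L[2] },
        { c2 * L[3],  c2 * L[1],  c2 * L[2],  c4 * L[0] - c5 * L[6] }
    }};
    return M;
}

// E(n) = n^T M n for the homogeneous normal n = (x, y, z, 1).
double irradiance(const Mat4& M, double x, double y, double z) {
    double n[4] = {x, y, z, 1.0};
    double e = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            e += n[i] * M.m[i][j] * n[j];
    return e;
}
```

Sanity check: a constant unit-radiance environment has L00 = 2√π and all other coefficients zero, and its irradiance is π for every normal.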

4.2.2 Down-Sampled Representation

A small class for creating spherical samples in both polar and icosahedron-subdivision form provides the basis for the tests used in this thesis. The algorithm is not limited to these types of samples, as discussed in Section 3.2.2, and may benefit strongly from user-defined input.

In Listing 4.2 the algorithm for down-sampling one environment map is presented. Just as in Listing 4.1, an image iterator is assumed to exist which returns


[Figure 4.1 graphic: a texture of RGBA column vectors, four per colour-channel matrix, laid out as the red, green, and blue matrices across each row, with one row (set) per light probe image.]

Figure 4.1. The binary format of the spherical harmonic data processed from the light probe sequence, presented in terms of its use as a texture on the GPU.

only relevant pixels in the environment map. In this case, the pixel information also contains the direction that pixel corresponds to in the environment map.

struct SampleInfo
    Colour totalRadiance;
    Colour averageRadiance;
    int numPixels;

SampleInfo[] DownSample(EnvMap, Vector directions[])
    SampleInfo sInfo[directions.size]
    for pixel = all pixels in EnvMap
        find index dir in directions
            such that directions[dir] is closest to pixel.direction
        sInfo[dir].totalRadiance += pixel.colour
        sInfo[dir].numPixels++
    for i = 0 to sInfo.size
        sInfo[i].averageRadiance = sInfo[i].totalRadiance / sInfo[i].numPixels
    return sInfo

Listing 4.2. Down-Sampling algorithm

Finding the closest sample direction for each pixel's direction can be evaluated by taking the dot product between the directions and selecting the one with the maximum value. For each pixel this has to be done N times, where N is the number of sample directions. If this algorithm is applied to an environment map sequence whose images share the same dimensions, a lookup table can easily be generated that stores the index of the closest sample direction for each pixel in the image. This optimisation can greatly increase the performance of the algorithm. By also adding multi-threaded computation for each image, the tests run on a 1.8 GHz dual-core dual-processor machine were able to down-sample a 500 image light probe sequence in 42 directions in under 10 seconds, with the bottleneck being hard disk I/O. This allows for extremely quick testing of various directions to find an optimal set.
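The lookup-table optimisation can be sketched as follows (hypothetical C++; the closest sample direction is the one maximising the dot product with the pixel's unit direction):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Dir { double x, y, z; };

// Index of the sample direction closest to d (largest dot product,
// assuming unit-length directions on both sides).
std::size_t closestDirection(const std::vector<Dir>& dirs, const Dir& d) {
    std::size_t best = 0;
    double bestDot = -2.0;  // below any possible dot product of unit vectors
    for (std::size_t i = 0; i < dirs.size(); ++i) {
        double dot = dirs[i].x * d.x + dirs[i].y * d.y + dirs[i].z * d.z;
        if (dot > bestDot) { bestDot = dot; best = i; }
    }
    return best;
}

// Precompute, once per image layout, the closest-direction index of every
// pixel; every subsequent map in the sequence reuses this table.
std::vector<std::size_t> buildLookupTable(const std::vector<Dir>& sampleDirs,
                                          const std::vector<Dir>& pixelDirs) {
    std::vector<std::size_t> table(pixelDirs.size());
    for (std::size_t i = 0; i < pixelDirs.size(); ++i)
        table[i] = closestDirection(sampleDirs, pixelDirs[i]);
    return table;
}
```

The O(pixels × N) search then runs once, and each remaining image is binned in O(pixels).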

The information produced by the down-sampling is split between two texture representations that are stored in graphics hardware. The first representation is a 1D texture that stores all the sample directions. The second is a 2D texture comprised of average radiance values, one per direction. Both of these texture formats are shown in Figure 4.2. The corresponding GLSL code that utilises these textures is shown in Section B.2.

[Figure 4.2(a) graphic: Sample Directions Texture Format - a 1D texture of RGB texels holding the direction vectors d1, d2, ..., dN.]

[Figure 4.2(b) graphic: Sample Radiance Texture Format - a 2D texture with one row per set, each row holding the RGB radiance in each direction d1, d2, ..., dN.]

Figure 4.2. The binary format of the down-sampled data processed from the light probe sequence, presented in terms of its use as textures on the GPU.

4.3 Maya Hardware Shader

Implementing a simple Maya hardware shader involves a basic understanding of the Maya design and C++ API. References for this are [17], Gould [18], and the extensive online documentation and example code that comes with an installation of Maya. Sample code for doing this is found in the $MAYA_INSTALL_DIR/devkit/plug-ins directory, with the various examples having filenames starting with hw.

Hardware shading nodes extend from the class MPxHwShaderNode and must provide, at minimum, implementations for:

• glBind(...) - Generally binds all textures, shaders, and other resources before a draw call.

• glUnbind(...) - Generally does the unbinding of the previously bound resources.

• glGeometry(...) - Draws the geometry for this object using the arguments passed to the function.

More about these calls can be found in [17], which describes the different schemes that may change the role of these functions, especially the glBind and glUnbind calls.

As suggested by these function names, Maya expects OpenGL API calls in order to bind, draw, and unbind the information. This allows these plugins to be used on the majority of major operating systems without the need for a rewrite.

The hardware shader in this thesis required a set of attributes for manipulating the data, as shown in Figure 4.3. These attributes exist both as static MObject variables and as private instance variables. More on the coding specifics can be seen in the examples. These attributes allow for colour manipulation and scaling, and data set scaling.

Global Scaling Describes the overall scale value applied to the radiance values,

Colour Scaling Scales each colour channel individually,

Zero and One All values below Zero are cut, and the remaining radiance values in the dataset are scaled by the factor 1/(One − Zero).

Sample Path Scale This value scales p_proj,z, allowing the sample space to be stretched or shrunk.

Sample Min All samples prior to this are not considered.

Sample Max All samples after this are not considered.
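The Zero and One attributes thus implement a simple linear remap: values below Zero are cut, and the remainder is rescaled so that One maps to 1.0. A hypothetical one-line sketch:

```cpp
#include <algorithm>
#include <cassert>

// Cut everything below `zero`, then rescale so that `one` maps to 1.0.
double remapRadiance(double v, double zero, double one) {
    return std::max(v - zero, 0.0) / (one - zero);
}
```

In the GLSL shader of Appendix B.1 the same remap appears as multiplier * (col - vec3(zero)) with multiplier = 1.0 / (one - zero), there without the clamp.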

The outColor attribute of the hardware shader must connect to the hardwareShader attribute of a basic native Maya material such as Lambert, Phong, or Blinn. This Maya material is then connected to the surfaceShader attribute of any shading group that wishes to use the hardware shader. Figure 4.4 shows the shading graph for an object that renders off-line using a mental ray material, but uses the hardware shader through the Lambert Maya material for viewport rendering.

The mental ray materials connect to the miMaterialShader attribute of the same shading group.


Figure 4.3. The hardware shader attribute editor.

Figure 4.4. The hardware shader attribute connections.


Figure 4.5. A screenshot of the Maya hardware shader in action.

4.3.1 Limitations

There are several limitations in this Maya hardware shader that can be rectified in future versions. Two of the most important are:

1. The dataset cannot be transformed using the Maya translate, rotate, and scale tools. The addition of this would allow for much easier use of these datasets.

2. The hardware shader doesn't support the material properties of the objects within the scene. At a minimum, the objects' colours should be trivial to render; however, other properties such as colour due to texturing and bump-mapping may be harder to implement, as they would have to be incorporated into the hardware shader.

4.4 Mental ray Shaders

Mental ray offers a variety of shader types that suit almost every need. For this thesis, the following two shaders were required for the most basic implementation of high-fidelity light probe sequence rendering. Shadows were turned off as they produced artifacts that could not be debugged in time.

4.4.1 Environment Shader

An environment shader is typically called by mental ray when a ray leaves a non-enclosed scene. In a traditional IBL implementation, this shader would perform a lookup in an environment map corresponding to the direction the ray left the scene. Since a more complex set of data is available here, a custom environment shader must be created that performs ray-projection to evaluate the correct colour. The effect of the environment shader is seen most clearly on specular surfaces, since they reflect the environment.

For the test scenes, the environment shader was attached to the camera, allowing it to become the default environment shader for all objects within the scene.

4.4.2 Area Light Shader

The direct lighting effects, most visible on diffuse surfaces, are achieved through a custom area light shader. This implementation requires the user to create a spherical area light source around the scene and attach the custom shader to it. The ray-projection algorithm doesn't directly support importance sampling or other such optimisations, so many samples on the spherical light source are required to ensure a low-noise image. For these examples, 16384 samples were used, leading to very long render times.
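Without importance sampling, light directions are drawn uniformly from the sphere. A standard way to generate such samples is the inversion method (a hypothetical C++ sketch; the thesis shader may generate its samples differently):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Sample3 { double x, y, z; };

// Map (u1, u2) in [0,1)^2 to a uniformly distributed unit direction:
// z is uniform in (-1, 1], azimuth is uniform in [0, 2*pi).
Sample3 uniformSphereSample(double u1, double u2) {
    const double pi = std::acos(-1.0);
    double z = 1.0 - 2.0 * u1;
    double r = std::sqrt(std::max(0.0, 1.0 - z * z));  // radius of the z-slice
    double phi = 2.0 * pi * u2;
    return Sample3{r * std::cos(phi), r * std::sin(phi), z};
}
```

Drawing 16384 such directions per shading point, as quoted above, is what makes the unoptimised renders so slow.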

(a) Scene 1 with no shadows. (b) Scene 1 with shadows.

(c) Scene 2 with no shadows. (d) Scene 2 with shadows.

Figure 4.6. Two scenes showing the scene rendered improperly when shadow tracing was enabled.

4.4.3 Issues

Though the shaders above reproduce the spatially varying lighting captured in the dataset, various issues were encountered during the creation of the mental ray shaders which resulted in substandard renderings.


The most prominent issue, which reduces the overall realism, is the lack of shadowing. All attempts to enable shadow tracing resulted in strange artifacts, seen in Figure 4.6, and due to limited time and experience with mental ray this issue was not resolved.

Another unresolved issue was the skewing of the spatially varying data. This can be seen in Figure 4.7.

(a) Rendered scene showing the skewing issue.

(b) Maya viewport screenshot showing the orientation of the scene.

Figure 4.7. The skewing of spatial variance. This is a top view of the xz-plane, with the sample path (z-axis) located toward the middle of the image, as seen in 4.7(b).


Chapter 5

Conclusion

This thesis has produced a set of tools that show it is possible for 1D spatially varying light probe sequences to be used within a production environment. Given more development time, these tools should be easily integrated into any production pipeline, allowing for more accurate illumination of virtual objects within a scene.

The biggest limiting factor for this adoption into the industry is the lack of available real-time HDR capture devices. Without these devices, the capture process of HDR scene illumination is too long to be considered feasible in a time-critical environment such as a film set. Given that dense sampling is required for measuring high frequency spatial variations in illumination, a long capture time becomes even more impractical. The experimental setup of the real-time light probe, while not providing a complete commercial solution, has allowed this type of data to be captured, and has provided a basis on which it can be processed and used to render highly realistic images.

5.1 Future Work

Those wishing to extend this thesis may want to consider the following future work suggestions:

• Current unpublished work within this department at Linköping University has also shown the feasibility of a real-time diffuse and glossy preview of incident light fields captured as a volume of light probes, based on the work of Unger et al. [4]. A Maya hardware shader with this capability would be of great benefit.

• Down-sampling the radiance maps, in the same way as is done for real-time rendering, and using that in a mental ray user-defined area light source shader. This was attempted in the course of experimentation, but due to time constraints it was not in a state for inclusion in this thesis. It does, however, provide a great way to speed up rendering times and ensure that all light is considered in some way.


• Provide better interfaces to the current set of tools by working more closely with industry specialists.

• Handle the non-uniform spacing between light probe samples for more accurate reproduction of the lighting.

• Use the same ray-projection technique to handle non-linear sample paths. This would require a more sophisticated algorithm for determining the closest sample point(s) to the desired ray direction.

• Extrapolate the current 1D light probe sequence into a volume of sample points for real-time rendering. This can transform the non-uniformity of probe samples into a structured grid, allowing for quick lookups into the data.


Bibliography

[1] Paul Debevec. Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography. In SIGGRAPH '98: Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 189–198, New York, NY, USA, 1998. ACM. ISBN 0-89791-999-8. doi: http://doi.acm.org/10.1145/280814.280864.

[2] Jonas Unger, Stefan Gustavson, and Anders Ynnerman. Densely sampled light probe sequences for spatially variant image based lighting. In GRAPHITE '06: Proceedings of the 4th international conference on Computer graphics and interactive techniques in Australasia and Southeast Asia, pages 341–347, New York, NY, USA, 2006. ACM. ISBN 1-59593-564-9. doi: http://doi.acm.org/10.1145/1174429.1174487.

[3] Jonas Unger, Stefan Gustavson, and Anders Ynnerman. Spatially varying image based lighting by light probe sequences: Capture, processing and rendering. The Visual Computer, 23(7):453–465, July 2007.

[4] J. Unger, S. Gustavson, P. Larsson, and A. Ynnerman. Free form incident light fields. Computer Graphics Forum, 27(4):1293–1301, June 2008.

[5] Jonas Unger, Andreas Wenger, Tim Hawkins, Andrew Gardner, and Paul Debevec. Capturing and rendering with incident light fields. In Eurographics Symposium on Rendering: 14th Eurographics Workshop on Rendering, pages 141–149, June 2003.

[6] Robin Green. Spherical harmonic lighting: The gritty details, 2003. URL http://www.research.scea.com/gdc2003/spherical-harmonic-lighting.pdf.

[7] Volker Schönefeld. Spherical harmonics, 2003. URL http://heim.c-otto.de/~volker/prosem_paper.pdf.

[8] James T. Kajiya. The rendering equation. In Computer Graphics (Proceedings of SIGGRAPH 86), pages 143–150, August 1986.

[9] Henrik Wann Jensen. Realistic Image Synthesis Using Photon Mapping. A. K. Peters, Natick, MA, 2001.


[10] László Szirmay-Kalos. Monte-Carlo global illumination methods: state of the art and new developments. In 15th Spring Conference on Computer Graphics, pages 3–21, April 1999.

[11] Alain Fournier, Atjeng S. Gunawan, and Chris Romanzin. Common illumination between real and computer generated scenes. Graphics Interface '93, pages 254–262, May 1993.

[12] Paul E. Debevec and Jitendra Malik. Recovering high dynamic range radiance maps from photographs. In Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pages 369–378, August 1997.

[13] Paul Debevec. A median cut algorithm for light probe sampling. In SIGGRAPH '05: ACM SIGGRAPH 2005 Posters, page 66, New York, NY, USA, 2005. ACM. doi: http://doi.acm.org/10.1145/1186954.1187029.

[14] Edward H. Adelson and James R. Bergen. The plenoptic function and the elements of early vision. In Computational Models of Visual Processing, pages 3–20. MIT Press, 1991.

[15] Ravi Ramamoorthi and Pat Hanrahan. An efficient representation for irradiance environment maps. In Proceedings of ACM SIGGRAPH 2001, Computer Graphics Proceedings, Annual Conference Series, pages 497–500, August 2001.

[16] Ravi Ramamoorthi and Pat Hanrahan. On the relationship between radiance and irradiance: determining the illumination from images of a convex Lambertian object. Journal of the Optical Society of America A, 18(10):2448–2459, 2001. URL http://josaa.osa.org/abstract.cfm?URI=josaa-18-10-2448.

[17] Autodesk Maya. Maya hardware shader API, 2008. URL http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=9469002.

[18] David Gould. Complete Maya Programming: An Extensive Guide to MEL and C++ API. Morgan Kaufmann, first edition, 2003.


Appendix A

Remapping equations

The following shows the equations for mapping between 3D space representations and mirror-sphere image coordinates. An assumption made for the relevant equations is that (0, 0, 1) corresponds to the center of the mirror-sphere image.

(w, h)      the width and height of the image.

(x, y)      the discrete-valued pixel coordinates of the image, with (0, 0) being the lower-left coordinate, x ∈ [0, w), and y ∈ [0, h).

(x, y, z)   a unit-length vector in 3D space.

(u, v)      the normalised image coordinates with (0, 0) at the center of the image, and u, v ∈ [−1, 1]. u corresponds to the horizontal axis, while v corresponds to the vertical axis.

(θ, φ)      the spherical coordinates where θ ∈ [0, π] and φ ∈ [0, 2π]. θ defines the polar angle starting from z+, and φ defines the azimuthal angle starting from x+ and increasing in the direction of y+. See Figure A.1.


Figure A.1. The coordinate system in use.


Image-Space to Normalised Image-Space Coordinates

u = 2(x / (w − 1) − 0.5)
v = 2(y / (h − 1) − 0.5)        (A.1)

Normalised Image-Space to Image-Space Coordinates

x = ((u + 1) / 2)(w − 1)
y = ((v + 1) / 2)(h − 1)        (A.2)

Cartesian to Spherical Coordinates

θ = arccos(z)
φ = arctan(y / x)        (A.3)

Spherical to Cartesian Coordinates

x = sin(θ) cos(φ)
y = sin(θ) sin(φ)
z = cos(θ)        (A.4)

Normalised Image-Space to Spherical Coordinates

r = √(u² + v²)
θ = 2 arcsin(r)
φ = arctan(v / u)        (A.5)

Cartesian to Normalised Image-Space Coordinates

r = sin(½ arccos(z)) / √(x² + y²)
u = r x
v = r y        (A.6)
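These mappings can be transcribed directly (a hypothetical C++ sketch; atan2 is used for the inverse-tangent steps so the azimuth lands in the correct quadrant, and the poles, where x = y = 0, are guarded explicitly):

```cpp
#include <cassert>
#include <cmath>

// Spherical to Cartesian (Equation A.4).
void sphToCart(double theta, double phi, double& x, double& y, double& z) {
    x = std::sin(theta) * std::cos(phi);
    y = std::sin(theta) * std::sin(phi);
    z = std::cos(theta);
}

// Cartesian to spherical (Equation A.3, with atan2 for the quadrant).
void cartToSph(double x, double y, double z, double& theta, double& phi) {
    const double pi = std::acos(-1.0);
    theta = std::acos(z);
    phi = std::atan2(y, x);
    if (phi < 0.0) phi += 2.0 * pi;  // keep phi in [0, 2*pi)
}

// Unit direction to normalised mirror-sphere image coordinates (Equation A.6).
void cartToImage(double x, double y, double z, double& u, double& v) {
    double planar = std::sqrt(x * x + y * y);
    if (planar == 0.0) { u = 0.0; v = 0.0; return; }  // the z+/z- poles
    double r = std::sin(0.5 * std::acos(z)) / planar;
    u = r * x;
    v = r * y;
}

// Normalised image coordinates to spherical (Equation A.5), then Cartesian.
void imageToCart(double u, double v, double& x, double& y, double& z) {
    double r = std::sqrt(u * u + v * v);
    double theta = 2.0 * std::asin(r);
    double phi = std::atan2(v, u);
    sphToCart(theta, phi, x, y, z);
}
```

A useful property for testing is that imageToCart and cartToImage are exact inverses inside the unit disc.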


Appendix B

GLSL Code

B.1 Spherical Harmonic Rendering for Light Probe Sequences

B.1.1 Vertex Shader

varying vec3 normal;
varying vec4 position;

void main()
{
    gl_Position = ftransform();
    normal = gl_Normal;
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    position = gl_Vertex;
}

B.1.2 Fragment Shader

// All the program input:
uniform sampler2D matrixTex;
uniform int total;
uniform float setSpacing;
uniform float zero;
uniform float one;
uniform vec3 colourScale;
uniform float scale;

// The input from the vertex shader:
varying vec4 position;
varying vec3 normal;

// Some globals:
const float vectorWidth = 0.08333333333333; // 1 / 12
vec2 midPixelOffset = vec2(vectorWidth * 0.5, setSpacing * 0.5);


// From the i ’ th s e t g e t the component c ’ s matrix// The s e t s are l i s t e d on the he igh t , and each component// matr ices column−v e c t o r s on the width .mat4 getMatrix ( int i , int c )

f loat index = clamp( f loat ( i ) , 0 . 0 , f loat ( t o ta l −1) ) ;f loat component = clamp( f loat ( c ) , 0 . 0 , 2 . 0 ) ;// There are 12 column v e c t o r s per s e t ; 4 per RGB−component ’ s

matrixvec2 matrix = midPixe lOf f s e t + vec2 ( 4 . 0∗ vectorWidth∗ f loat (

component ) , index ∗ s e tSpac ing ) ;vec2 vec_o f f s e t = vec2 ( vectorWidth , 0 . 0 ) ;vec4 v1 = texture2D (matrixTex , matrix ) ;vec4 v2 = texture2D (matrixTex , matrix + 1 .0 ∗ vec_o f f s e t ) ;vec4 v3 = texture2D (matrixTex , matrix + 2 .0 ∗ vec_o f f s e t ) ;vec4 v4 = texture2D (matrixTex , matrix + 3 .0 ∗ vec_o f f s e t ) ;return mat4( v1 , v2 , v3 , v4 ) ;

// Evaluate each co lour channel ’ s matrixvec3 eva luateMatr ixSet ( int i , vec4 n)

vec3 c o l ;mat4 R = getMatrix ( i , 0) ;mat4 G = getMatrix ( i , 1) ;mat4 B = getMatrix ( i , 2) ;c o l . r = dot (n , R ∗ n) ;c o l . g = dot (n , G ∗ n) ;c o l . b = dot (n , B ∗ n) ;return c o l ;

// Evaluate the d i f f u s e co lour at sample po in t p o s i t i o n . z us ing// the normal at t h a t po in t .void main (void )

// must r o t a t e the a x i s and f l i p c e r t a i n components , due to// the d i f f e r e n t coord inate systems in OpenGL to the one// used by the s p h e r i c a l harmonic p r o j e c t i o n .vec4 n = vec4(−normal . x , −normal . z , normal . y , 1 . 0 ) ;int i 1 = int (abs ( f loor ( p o s i t i o n . z ) ) ) ;int i 2 = int (abs ( c e i l ( p o s i t i o n . z ) ) ) ;vec3 co l 1 = eva luateMatr ixSet ( i1 , n ) ;vec3 co l 2 = eva luateMatr ixSet ( i2 , n ) ;// average the c o l o u r s . . .vec3 c o l = mix( co l1 , co l2 , p o s i t i o n . z − f loat ( i 1 ) ) ;f loat mu l t i p l i e r = 1 .0 / ( one − zero ) ;c o l = co l ou rS ca l e ∗ s c a l e ∗ mu l t i p l i e r ∗ ( c o l − vec3 ( ze ro ) ) ;gl_FragColor = vec4 ( co l , 1 . 0 ) ∗ gl_Color ;
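Per colour channel the fragment shader evaluates the quadratic form E(n) = nᵀMn in the homogeneous normal, then blends between neighbouring probe sets. The same arithmetic can be sketched CPU-side in Python (the matrix below is made up purely for illustration):

```python
def mat_vec(M, v):
    # 4x4 matrix times 4-vector
    return [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def evaluate(M, n):
    # Quadratic form n^T M n, as in the shader's dot(n, R * n)
    return dot(n, mat_vec(M, n))

def mix(a, b, t):
    # GLSL-style linear blend between two probe sets
    return a * (1.0 - t) + b * t

# Illustrative matrix: the identity, so E(n) = |n|^2 = 2 for a
# unit normal in homogeneous form (nx, ny, nz, 1)
I = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
n = (0.0, 0.0, 1.0, 1.0)
e = mix(evaluate(I, n), evaluate(I, n), 0.5)
```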


B.2 Down-sample Rendering for Light Probe Sequences

B.2.1 Vertex Shader

varying vec3 normal;
varying vec4 position;

void main()
{
    normal = gl_Normal;
    position = gl_Vertex;
    gl_FrontColor = gl_Color;
    gl_Position = ftransform();
}

B.2.2 Fragment Shader

// All the program input:
uniform sampler2D sampleTex;
uniform int totalImages;
uniform sampler2D directionsTex;
uniform int totalDirections;
uniform float sampleScale;
uniform int sampleMin;
uniform int sampleMax;
uniform float zero;
uniform float one;
uniform vec3 colourScale;
uniform float scale;

// The input from the vertex shader:
varying vec3 normal;
varying vec4 position;

// Some globals:
float imageSpacing = 1.0 / float(totalImages);
float directionSpacing = 1.0 / float(totalDirections);
float maxz = float(sampleMax);
float minz = float(sampleMin);
vec2 midPixelOffset = vec2(directionSpacing, imageSpacing) * 0.5;

// Evaluate the colour at a specific sample point (proj_z)
// and in a specific direction.
vec3 evaluate(float proj_z, float dirIndex)
{
    proj_z = clamp(proj_z, minz, maxz);
    float pz1 = floor(proj_z);
    float pz2 = ceil(proj_z);
    vec3 col = texture2D(sampleTex, midPixelOffset
        + vec2(dirIndex * directionSpacing, pz1 * imageSpacing)).rgb;
    if (pz1 != pz2)
    {
        vec3 col2 = texture2D(sampleTex, midPixelOffset
            + vec2(dirIndex * directionSpacing, pz2 * imageSpacing)).rgb;
        // Blend the colours of the two nearest sets
        col = mix(col, col2, proj_z - pz1);
    }
    return col;
}

// Each fragment must gather the contribution of colours from
// all directions and scale the result according to the
// uniform values input to the program.
void main(void)
{
    vec3 colour = vec3(0.0, 0.0, 0.0);
    vec3 n = normalize(normal);
    for (float i = 0.0; i < float(totalDirections); i += 1.0)
    {
        vec3 dir = texture2D(directionsTex, midPixelOffset
            + vec2(i * directionSpacing, 0.5)).xyz;
        float d = dot(n, dir);
        if (d > 0.0)
        {
            // Project the direction down to the sample path (z-axis)
            float denom = dot(dir.xy, dir.xy);
            float u = (denom < 0.0001) ? minz
                                       : -dot(position.xy, dir.xy) / denom;
            vec3 q = position.xyz + u * dir;
            colour += evaluate(q.z * sampleScale, i) * d; // d = cos(theta)
        }
    }
    float multiplier = 1.0 / (one - zero);
    colour = colourScale * scale * multiplier * (colour - vec3(zero));
    gl_FragColor = vec4(colour, 1.0) * gl_Color;
}
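The main loop above performs a cosine-weighted gather: each stored direction whose dot product with the surface normal is positive contributes its colour, after the fragment position is projected along that direction onto the capture path (the z-axis). A CPU-side Python sketch of the same control flow (the direction list and colour lookup below are placeholders, not the thesis data):

```python
def project_z(p, d, minz):
    # Parameter u minimising the xy-distance of p + u*d to the z-axis;
    # mirrors the shader's guarded division by dot(dir.xy, dir.xy)
    denom = d[0] * d[0] + d[1] * d[1]
    if denom < 1e-4:
        return minz
    return -(p[0] * d[0] + p[1] * d[1]) / denom

def shade(p, n, directions, colour_at, minz=0.0):
    # Cosine-weighted sum over all stored directions
    total = [0.0, 0.0, 0.0]
    for d in directions:
        w = sum(a * b for a, b in zip(n, d))  # w = cos(theta)
        if w > 0.0:
            u = project_z(p, d, minz)
            z = p[2] + u * d[2]               # sample point on the path
            c = colour_at(z, d)
            total = [t + ci * w for t, ci in zip(total, c)]
    return total

# A normal facing +z sees only the first of these two directions
result = shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
               [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)],
               lambda z, d: (1.0, 1.0, 1.0))
```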


Appendix C

Ray Projection Equation

Given:

r(t) = p + t\,d

\Delta(t) = \sqrt{r_x^2 + r_y^2}

p_{proj} = p + \left(\arg\min_t \Delta(t)\right) d

\arg\min_t \Delta(t) \equiv \arg\min_t \Delta^2(t)
= \arg\min_t \left(r_x^2 + r_y^2\right)
= \arg\min_t \left((p_x + t d_x)^2 + (p_y + t d_y)^2\right)
= \arg\min_t \left(p_x^2 + 2 p_x d_x t + d_x^2 t^2 + p_y^2 + 2 p_y d_y t + d_y^2 t^2\right)

Minimum when \frac{d}{dt}\Delta^2(t) = 0:

\frac{d}{dt}\left(p_x^2 + 2 p_x d_x t + d_x^2 t^2 + p_y^2 + 2 p_y d_y t + d_y^2 t^2\right) = 0

2 p_x d_x + 2 d_x^2 t + 2 p_y d_y + 2 d_y^2 t = 0

2 t \left(d_x^2 + d_y^2\right) = -2 \left(p_x d_x + p_y d_y\right)

t = -\frac{p_x d_x + p_y d_y}{d_x^2 + d_y^2}

p_{proj} = p - \frac{p_x d_x + p_y d_y}{d_x^2 + d_y^2}\, d

z_{proj} = p_{proj,z}
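The closed form can be checked numerically: at the minimising t, the xy-part of r(t) is perpendicular to the xy-part of d. A standalone Python sketch (the values of p and d are arbitrary):

```python
# Arbitrary ray r(t) = p + t*d; check the closed-form minimiser of
# the xy-distance to the z-axis derived above
p = (1.0, 2.0, 3.0)
d = (0.5, -1.0, 2.0)

t = -(p[0] * d[0] + p[1] * d[1]) / (d[0] ** 2 + d[1] ** 2)
r = [p[i] + t * d[i] for i in range(3)]

# At the minimum, r(t).xy is perpendicular to d.xy
perp = r[0] * d[0] + r[1] * d[1]
z_proj = r[2]
```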
