Computer Graphics and Visualisation - Lighting and Shading


Computer Graphics Unit, Manchester Computing, University of Manchester
Department of Computer Science, University of Manchester

Computer Graphics and Visualisation

Lighting and Shading
Student Notes

Developed by
J Irwin, W T Hewitt, T L J Howard


Produced by the ITTI Gravigs Project, Computer Graphics Unit, Manchester Computing, MANCHESTER M13 9PL

First published by UCoSDA in October 1995

Copyright The University of Manchester 1995

The moral right of John Irwin, William Terence Hewitt and Toby Leslie John Howard to be identified as the authors of this work is asserted by them in accordance with the Copyright, Designs and Patents Act (1988).

ISBN 1 85889 064 0

The training materials, software and documentation in this module may be copied within the purchasing institution for the purpose of training their students and staff. For all other purposes, such as the running of courses for profit, please contact the copyright holders.

For further information on this and other modules please contact: The ITTI Gravigs Project, Computer Graphics Unit, Manchester Computing. Tel: 0161 275 6095 Email: [email protected]

To order further copies of this or other modules please contact: Mrs Jean Burgan, UCoSDA. Tel: 0114 272 5248 Email: [email protected]

These materials have been produced as part of the Information Technology Training Initiative, funded by the Information Systems Committee of the Higher Education Funding Councils.

The authors would like to thank Neil Gatenby for his assistance in providing the source of some material used in this document.

Printed by the Reprographics Department at Manchester Computing from PostScript source supplied by the authors.


Table of Contents

1 Introduction

2 Local reflection models
  2.1 Light sources
    2.1.1 Light emitters
    2.1.2 Light reflectors
  2.2 Types of reflection
  2.3 Light source geometry
  2.4 Ambient illumination
  2.5 Diffuse reflection
  2.6 The position of the light source
  2.7 Specular reflection
    2.7.1 The Phong model
  2.8 Multiple light sources
  2.9 Colour

3 Shading surfaces
  3.1 Constant shading
    3.1.1 Mach banding
  3.2 Intensity interpolation
    3.2.1 Problems with intensity interpolation
  3.3 Normal vector interpolation
  3.4 Dot-product interpolation
  3.5 Summary

4 Texture and transparency
  4.1 Patterns
    4.1.1 Object space mapping
    4.1.2 Parametric space mapping
    4.1.3 Environment mapping
  4.2 Roughness
    4.2.1 Macroscopic roughness
    4.2.2 Microscopic roughness
    4.2.3 Phong vs Cook-Torrance
  4.3 The optics of transparency
  4.4 Modelling transparency

5 Ray-tracing
  5.1 Basic algorithm
  5.2 A simple viewing system
  5.3 Reflection and refraction vectors
  5.4 Shadows
  5.5 The ray-tree
  5.6 Object intersections
    5.6.1 Sphere intersection
    5.6.2 Polygon intersection
  5.7 Acceleration techniques
    5.7.1 Hierarchical bounding volumes
    5.7.2 3D spatial subdivision
  5.8 Image aliasing
    5.8.1 Anti-aliasing
    5.8.2 Adaptive supersampling
  5.9 Example images

6 Classical radiosity
  6.1 The radiosity equation
  6.2 Form-factor calculation
    6.2.1 Reciprocity relation
  6.3 Solving the radiosity equation
  6.4 The hemi-cube algorithm
    6.4.1 Calculating the delta factors
  6.5 Rendering
    6.5.1 Light and shadow leaks
  6.6 Progressive refinement radiosity
    6.6.1 Formulation of the method
    6.6.2 Ambient contribution
  6.7 Improving the radiosity solution
    6.7.1 Substructuring
    6.7.2 Adaptive subdivision
  6.8 Example images

A Vectors
  A.1 What is a vector?
  A.2 Vector addition
  A.3 Vector multiplication
  A.4 Determining the surface normal

B Resources
  B.1 Books
  B.2 Research literature
  B.3 Software


1 Introduction

We see objects only by virtue of the light they reflect into our eyes. Additionally, the way in which the light is reflected from objects gives us important information about their three-dimensional shape. One of the first things painters learn is the power of light and shade to make pictures look realistic. Similarly, in the quest for realism in computer graphics, we must consider the effects of illumination on the objects we model.

Ultimately, we draw pictures by displaying collections of pixels with the appropriate colours and intensities. We need methods for determining exactly what these values should be.

A previous module in this series, Colour in Computer Graphics, considered how to determine the colour of a light source from its emission spectrum. We also looked at the colour of light reflected from an object, considering the emission spectrum of the light and the reflectance spectrum of the object. The illuminant was shown to be important – table lamps illuminate their surroundings quite differently from fluorescent strip-lamps, for example.

This was sufficient to define the intrinsic colour of a single, isolated object. When rendering a scene, however, we must consider many other factors, including:

• The geometrical arrangement of the light, the object and the viewer. Experience tells us that at certain angles we see reflected glare from the light source, and this is not the same colour as the object. So the reflectance spectrum is geometry dependent. Also, surfaces directly facing a light source will appear brighter than surfaces inclined at an angle to it.

• The geometry of the illuminant – is it a point or an area, and how far away is it from each object?

• The physical nature of the surface – is it shiny, matte, coloured, patterned, transparent, smooth or bumpy?

• The effect of other surfaces in the scene.

• The medium through which the light passes – is it smoky, foggy, underwater or raining, for example?

In this module we will look at techniques for quantifying these rather vague ideas, so we can compute how to draw realistic pictures. One thing to bear in mind right at the start is that there is no single method which gives the 'right' result all of the time. Rather, it is a case of making a model of the effect of light on objects. The closeness of fit of this model to physical reality depends on what phenomena we choose to model and how much effort we are prepared to expend for a particular effect. The trick is to use a model that fits well enough for the particular purpose. The amount of effort that this requires varies greatly. A simple 3D previewer requires considerably fewer computational resources than an architectural lighting simulation, for example.


Figure 1 illustrates that image synthesis is a combination of first defining what to put in the picture, and then how to display it. These two stages are often known respectively as modelling and rendering.

Figure 1: The rendering pipeline. The geometry is defined first (polygons, curves, surfaces, together with lights, patterns and textures) and then rendered, including hidden surface removal, to produce pictures.

To a large extent, the two steps are quite independent. Given a particular geometrical description, we can render an image of it in many ways. These notes concentrate on illumination modelling and rendering; geometric modelling is only discussed where it affects rendering.

During the process of rendering we must consider the following distinctions:

• The illumination of objects by light sources (of whatever origin)

• The reflection of light from objects

• The shading of objects (for display on a graphics screen)

Concerning the first point, we have to model the geometry of the light source, both its physical dimensions as well as its intensity distribution, together with its spectral distribution, i.e. what colour it is. For the second point, we need to consider how incident light is modified by a particular surface, especially the colour of the light. We do this by defining a light reflection model, and there are many ways to do this. The third point is necessary since in most cases we only compute the light leaving a few specific points on a surface and use various interpolation techniques to find the light leaving the surface at other locations.

There are two ways of computing illumination effects: local illumination models and global illumination models.

Local models consider each object in a scene in isolation, without reference to other objects in the scene. Scenes rendered in this way therefore look artificial and contain no shadows or reflection effects. Because they are simple, and the time to render one object does not depend on the overall scene complexity, local models are fast. Also, this simplicity allows local models to be implemented in hardware on graphics workstations, which gives further speed benefits. Local models are a suitable choice when the purpose of rendering is to give basic information about the 3D shape of an object. An example would be the display of a molecule made of thousands of coloured spheres. All that is needed is an indication of shape, and it is important that rendering is fast so that the molecule may be rotated and examined in real time.

In contrast, global models consider the relationships between objects in a scene and can produce more realistic images incorporating shadows, reflection, and refraction. Global models are used when a realistic portrayal of some aspect of physical reality is desired, and when this realism is more important than the speed of rendering. In general, the more realistic the lighting model, the greater the computational cost.

In the next chapter we will develop a simple local reflection model which takes into account the following:

• Illumination from a point light source

• Illumination from ambient light

• Diffusely reflected light

• Specularly reflected light

In subsequent chapters we will see how this model can be used to produce shaded images of objects and how the model can be refined to include effects due to surface finish, such as texturing and bump mapping, and also transparency.

Following this we will turn our attention to the problem of modelling the global reflection of light. This will cover a basic discussion of two of the most popular techniques, known as ray-tracing and radiosity, which actually model two quite different aspects of light reflection.


2 Local reflection models

2.1 Light sources

There are two basic kinds of illumination source: emitters and reflectors.

2.1.1 Light emitters

Anything that gives off light, such as the sun, a light bulb, or a fluorescent tube, is an emitter. Characteristics of emitters include the overall intensity of the light and its spectrum, which determines the colour of the light. The spectra of some common light sources are shown in Figure 2.

Figure 2: Spectra of sunlight (Illuminant D65) and an incandescent light bulb (Illuminant A), plotted as relative intensity against wavelength from 300 to 800 nm.

We can further classify light emitters into two types. When the light source is sufficiently far away from the surface, we can consider it to be an infinite point source, and all the light rays effectively strike the surface at the same angle. But when the light source is near to the surface, rays will arrive at different points on the surface at different angles, as illustrated in Figure 3.


Figure 3: Distant and nearby light sources.

We may also wish to take the geometry of the light source into account. In particular, we can classify emitter geometry into two types: point sources and area sources. With area light sources, a single point on an object can be illuminated from several different angles from the same light source, as shown in Figure 4.

Figure 4: Point and area light sources.

As any physical light source must take up a certain amount of space, all light sources are area sources! However, considering some small area sources to be point sources gives a significant reduction in complexity and is an acceptable compromise in many instances.

2.1.2 Light reflectors

Turn a spotlight on in one corner of a room, and parts of the room not directly lit by the spotlight will still be illuminated by light reflected off walls, ceiling, carpets, people, and so on. So light reflectors are all the other objects in a scene that are not emitters. Although clearly an emitter can also reflect light, the amount reflected is so small in comparison with the emitted light that the reflected portion is frequently ignored. The colour of a light reflector depends on the spectrum of the incident light and the reflectance spectrum of the reflective object.


In addition to these direct reflections, there will be indirect ones – reflections of reflections, and so on. These multiple reflections in a scene produce an overall illumination called ambient light, which is discussed further below.

2.2 Types of reflection

We need to develop a model to enable us to calculate how much light is reflected from different objects in a scene. There are two ways in which reflection occurs:

• Diffuse reflection – light is scattered uniformly from the surface of the object (Figure 5a), and the surface looks dull. The surface is effectively absorbing the illumination and re-radiating it only at preferred wavelengths. For example, a green object looks green when illuminated with white light because it absorbs all wavelengths except green, which it reflects.

• Specular reflection – a high proportion of the incident light is reflected over a fairly narrow range of angles, and the object looks shiny (Figure 5b). To a first approximation at least, the colour of the surface does not affect the colour of the specularly reflected light.

In practice, many materials exhibit both of these effects to different degrees, and the total reflection is of the kind shown in Figure 5c.

Figure 5: Diffuse, specular and combined reflections

2.3 Light source geometry

Consider a single light source, modelled as a point infinitely distant from the object being lit. Clearly, the amount of light reflected from a surface will depend on its orientation to the source of light. We describe this quantitatively with two vectors, shown in Figure 6 (refer to Appendix A if you need a refresher on vectors):

• The normal vector N – this is a vector normal to (at right angles to) the surface. For example, for a surface lying in the xz-plane, its normal vector will be parallel to the y-axis.

• The light vector L – this points towards the light source.

Figure 6: Describing a surface and its orientation: the normal vector N, the vector L towards the light source, and the angle of incidence θ between them.

Since only the directions of these vectors are required to orient the surface with respect to the light source, it is convenient (and conventional) to use unit vectors, so we will assume that N and L have been normalised. The angle θ between N and L is called the angle of incidence.

2.4 Ambient illumination

Multiple reflections of light in a scene together produce a uniform illumination called ambient light. Actually, this is an over-simplification. For many purposes, we can assume a constant level of ambient light in a scene. In reality, however, ambient light effects are much more subtle.

The intensity I of diffuse reflection at a point on an object illuminated only by ambient light of intensity Ia is:

I = kd Ia    (2-1)

where kd is called the diffuse reflection coefficient and is a constant in the range 0 to 1. Its value varies from surface to surface according to the surface properties of the material in question. A highly reflective surface will have kd approaching 1, and a surface that absorbs most of the light falling onto it will have kd near 0. Note that some researchers refer to this term as ka.

We could use Equation 2-1 to compute the illumination of an object, but each point on the object would be assigned the same intensity. This is not realistic, as we know that in practice the shading of objects is not constant. If this were the case we would not be able to perceive the 'three-dimensionality' of the object. In Figure 7 we illustrate how a cube would appear if it were illuminated solely by ambient lighting. We achieve a more realistic effect by considering localised light sources (such as point sources) and the orientation of surfaces with respect to light sources.

Figure 7: Object illuminated only by ambient light

2.5 Diffuse reflection

The amount of light reflected by the surface depends on the amount received from the source of illumination, which in turn depends on the intensity of the illumination and the orientation of the surface with respect to it. If N and L are in the same direction (where θ = 0°), reflection will be at a maximum. If they are at right angles (where θ = 90°), reflection will be at a minimum. The precise relationship is expressed by Lambert's Cosine Law, which states that the effective intensity Ie incident on a surface when illuminated by a light of intensity Ip incident at θ is:

Ie = Ip cos θ

Expressing the diffuse reflectivity of the surface by kd, the amount of light I diffusely reflected by a surface illuminated by a light of intensity Ip incident at θ is:

I = kd Ip cos θ    (2-2)

Since N and L are normalised, their dot product gives the cosine of the angle between them:

cos θ = N · L

so we can rewrite Equation 2-2 entirely in terms of vectors as:

I = kd Ip (N · L)    (2-3)

We can now combine the effects of ambient illumination and a point light source to give the illumination model:

I = ambient term + diffuse term

or,

I = kd Ia + kd Ip (N · L)    (2-4)
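As a concrete illustration of Equation 2-4, the short Python sketch below evaluates the ambient plus diffuse model at one point. It is a minimal sketch rather than part of the original notes: the vector helpers, the example values for kd, Ia and Ip, and the clamping of N · L to zero for surfaces facing away from the light are all assumptions added for the example.

    import math

    def normalise(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def ambient_diffuse(kd, Ia, Ip, N, L):
        """Equation 2-4: I = kd*Ia + kd*Ip*(N . L).  N . L is clamped to zero so
        that points facing away from the light receive only the ambient term."""
        n_dot_l = max(0.0, dot(normalise(N), normalise(L)))
        return kd * Ia + kd * Ip * n_dot_l

    # Example: a surface in the xz-plane lit from 45 degrees above (made-up values).
    print(ambient_diffuse(kd=0.4, Ia=0.2, Ip=1.0,
                          N=(0.0, 1.0, 0.0), L=(0.0, 1.0, 1.0)))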


Shown in Figure 8 are four spheres whose surfaces have been shaded using the diffuse reflection model given by Equation 2-3. From left to right and then top to bottom, kd = 0.1, 0.4, 0.7 and 0.95.

Figure 8: Spheres shaded using diffuse reflection model

Shown in Figure 9 are four spheres again, but this time the surfaces are shaded according to Equation 2-4, which includes an ambient lighting term. Each sphere has kd = 0.4, but Ia = 0%, 20%, 40% and 60% of Ip respectively as we move from left to right and then from top to bottom.

Figure 9: Spheres shaded using ambient and diffuse reflection


Notice that in Equation 2-4 the intensity of light reflected from a surface is independent of the distance of the point source from the object. This means that if two objects have parallel surfaces, and seen from the viewpoint the surfaces overlap, they will be shaded the same and will merge into one. Also, note that neither the colour of the illuminating light nor that of the surface is taken into account in this model. We will come on to this shortly.

2.6 The position of the light source

This can be modelled by observing that the intensity of light falls off in inverse proportion to the square of the distance the light has travelled. If light from a source of intensity Ip travels a distance d, its effective intensity is Ip/4πd². We can incorporate this into Equation 2-4 to give:

I = kd Ia + kd Ip (N · L) / (4πd²)

(Note that the ambient term kd Ia is not dependent on distance, since by definition the ambient source is everywhere.) However, this model does not give realistic results in practice. In typical modelling applications, we will either have a point light source at infinity, so that d is infinite, or we have a light source at a finite position. Either way, we will typically have values of d that differ little between objects, and this, in conjunction with the finite range of values available to assign to pixels, may mean that the effect of the d² term in the model is too great. Experiments have shown that a better model is to use a linear approximation for the fall-off of intensity with distance, using:

I = kd Ia + kd Ip (N · L) / (d + d0)    (2-5)

where d is the actual distance from the light source to the object, and d0 is a constant determined experimentally. (There are a lot of 'constants determined experimentally' in computer graphics.)

2.7 Specular reflection

An object modelled using only diffuse reflection appears dull and matte. However, many real surfaces are shiny. At certain viewing angles, a shiny surface reflects a high proportion of the incident light – this is called specular reflection. When this happens you see a bright spot on the object, the same colour as the illuminating light.

First we consider a perfect reflector such as a highly-polished mirror. In Figure 10, the unit vector R represents the direction of specular reflection. V is the direction from which the surface is being viewed. For a perfect reflector, the angle of incidence equals the angle of reflection, so the specular reflection can only be observed when V and R coincide, such that φ = 0°.

Real materials, however, behave less exactly. Light arriving at an angle of incidence of θ is reflected in a range of angles centred on θ, where the intensity is at a maximum. The intensity of the specular reflection falls off rapidly as the angle φ between the direction of maximum ideal reflection R and the viewing direction V increases. We need to somehow incorporate this effect into our illumination model.

Figure 10: The four reference vectors: the normal vector N, the light vector L (towards the light source), the reflection vector R and the view vector V; θ is the angle of incidence and φ the angle between R and V.

2.7.1 The Phong model

Phong developed an empirically based model in which the intensity of specular reflection is proportional to cos^n φ. Figure 11 graphs cos^n φ for −π/2 ≤ φ ≤ π/2. We use n to represent the surface properties, and it is typically in the range 1 to 200. For a perfect reflector, n = ∞; for a very poor specular reflector, like foam, n = 1.

Figure 11: The behaviour of cos^n φ, shown for n = 1, 5 and 50.

For real materials, the amount of light specularly reflected also depends on the angle of incidence θ. In general, the intensity of the specular reflection increases as the angle of incidence increases. Exactly how the two are related depends on individual surface properties, and we represent this with the specular reflection function W(θ), which has a value in the range 0 to 1. The precise behaviour of W(θ) will vary from surface to surface. Figure 12 shows how reflection depends on the wavelength of incident light and the angle of incidence for several materials. For glass, for example, the specular reflection increases from almost nothing to full intensity as the angle of incidence increases from 0° to 90°. Other materials show much less variation, and W(θ) is often replaced by a constant value ks selected experimentally. This constant, ks, is called the specular reflection coefficient.

Figure 12: Reflection curves: reflectance (%) plotted against wavelength and against angle of incidence for several materials (silver, aluminium, gold, steel and a dielectric).

We can now incorporate specular reflection into our illumination model of Equation 2-5, by adding the specular reflection component to the diffuse and ambient reflections:

I = ambient term + diffuse term + specular term

or,

I = kd Ia + [Ip / (d + d0)] [kd (N · L) + ks cos^n φ]    (2-6)

Since R and V are normalised, cos φ = R · V, and we can rewrite Equation 2-6 entirely in terms of vectors:

I = kd Ia + [Ip / (d + d0)] [kd (N · L) + ks (R · V)^n]    (2-7)

Figure 13: Spheres shaded using Phong’s model


Figure 13 shows a sphere illuminated using Equation 2-7 with different values of ks and n. In each case kd = 0.4 and Ia = 20% of Ip. The top row has ks = 0.25 while the bottom has ks = 0.5. Moving from left to right, n = 3, 10 and 100 respectively.
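A small Python sketch of Equation 2-7 follows, using roughly the parameters quoted for Figure 13. It is only an illustration: the reflection vector is obtained from the standard identity R = 2(N · L)N − L, and the clamping of the dot products, the distance values d and d0, and the example geometry are assumptions of the sketch rather than values from the notes.

    import math

    def normalise(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def phong(kd, ks, n, Ia, Ip, d, d0, N, L, V):
        """Equation 2-7: ambient term plus distance-attenuated diffuse and
        specular terms, with the dot products clamped to be non-negative."""
        N, L, V = normalise(N), normalise(L), normalise(V)
        n_dot_l = max(0.0, dot(N, L))
        # Mirror reflection of L about N (a standard identity).
        R = tuple(2.0 * n_dot_l * Nc - Lc for Nc, Lc in zip(N, L))
        r_dot_v = max(0.0, dot(R, V))
        return kd * Ia + (Ip / (d + d0)) * (kd * n_dot_l + ks * r_dot_v ** n)

    # Roughly the top-left sphere of Figure 13 (d and d0 are assumed values).
    print(phong(kd=0.4, ks=0.25, n=3, Ia=0.2, Ip=1.0, d=1.0, d0=1.0,
                N=(0.0, 0.0, 1.0), L=(0.3, 0.3, 1.0), V=(0.0, 0.0, 1.0)))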

2.8 Multiple light sources

Extending Equation 2-7 to include more than one source of illumination is straightforward. Each source will cause diffuse and specular reflection at the surface, so we need to sum these effects. If there are n different point sources, our model becomes:

I = ambient term + Σ(i=1..n) diffuse term_i + Σ(i=1..n) specular term_i

This summation, however, can cause problems with overflow in the value of I, although the problem can also occur with a single light source. For display purposes we require the range of I to be limited, usually 0 to 1 in normalised form. There are a number of ways to deal with overflow. First, we could avoid the situation arising in the first place by adjusting the reflection parameters or the relative light source strengths. If we decide this is not such a good idea then adjusting the computed value of I is necessary. In this case we can either

• adjust the value locally by, for example, clamping the value to the required range, or

• adjust the value globally by considering the full range of intensities calculated for the image and scaling everything at the same time so that no values are out of range.

The second method is generally preferred, since it avoids the limiting errors that lead to Mach banding in the image.
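The sketch below shows one way the summation over several sources and the preferred global adjustment might look in Python. The per-light terms are assumed to have been computed already (for example with Equation 2-7); the function names and example numbers are illustrative only.

    def shade_point(kd, Ia, per_light_terms):
        """Ambient term plus the (already attenuated) diffuse and specular
        contribution of each point source, as in the summation above."""
        return kd * Ia + sum(per_light_terms)

    def rescale_image(intensities):
        """Global adjustment: if any pixel intensity overflows the display range,
        scale every pixel by the same factor so the maximum becomes 1.0.  Relative
        differences are preserved, avoiding the banding that clamping can cause."""
        peak = max(intensities)
        return list(intensities) if peak <= 1.0 else [v / peak for v in intensities]

    print(shade_point(kd=0.4, Ia=0.2, per_light_terms=[0.35, 0.55]))
    print(rescale_image([0.3, 0.9, 1.4]))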

2.9 Colour

So far we have mainly assumed a monochromatic light source, and have ignored the fact that objects are coloured. We express colour using a colour model, which gives us a way of assigning numerical values to colours. A common simple model is the Red-Green-Blue (RGB) model, where a colour is represented by a mixture of the three primary colours red, green and blue, corresponding to the colours of the monitor phosphors. This is illustrated in Figure 14.

To consider the effect of coloured illumination and coloured objects, we must apply our illumination model to each component of the colour model we are using. For example, if we are using the RGB colour model, we would treat each of the red, green and blue components of the colour separately.

We can take the surface (diffuse) colour of objects into consideration by expressing the diffuse reflection coefficient as a colour-dependent parameter (kdR, kdG, kdB). We also have colour components for the illumination; the red component of Ip is IpR, and so on. So, for the red component our model becomes:

IR = kdR IaR + [IpR / (d + d0)] [kdR (N · L) + ks (R · V)^n]


Figure 14: The RGB colour model. Colours are points in the unit cube: the primaries red, green and blue lie at (1, 0, 0), (0, 1, 0) and (0, 0, 1), cyan, magenta and yellow at (0, 1, 1), (1, 0, 1) and (1, 1, 0), black at (0, 0, 0) and white at (1, 1, 1), with shades of grey along the diagonal.

and similarly for green and blue. We are assuming that the specular highlight will be the same colour as the light source, hence ks is not colour dependent. Once we have the three intensities IR, IG and IB, and each component is within the display range, we can use these to directly drive the RGB monitor to display the required colour. Out-of-range values are treated, in ways similar to those mentioned before, by clamping or global scaling. We could also locally scale the colour by dividing each component by the value of the largest component, a method which preserves the hue and saturation of the colour but modifies its lightness.
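Treating each channel separately can be written out as a short sketch. The coefficient values below are made up, and the clamped dot products are assumed to have been computed already; only the per-channel structure and the local scaling step reflect the text above.

    def shade_rgb(kd_rgb, ks, n, Ia_rgb, Ip_rgb, d, d0, n_dot_l, r_dot_v):
        """Evaluate the illumination model once per colour channel.  kd is colour
        dependent; ks is not, since the highlight takes the colour of the light."""
        return [kd * Ia + (Ip / (d + d0)) * (kd * n_dot_l + ks * r_dot_v ** n)
                for kd, Ia, Ip in zip(kd_rgb, Ia_rgb, Ip_rgb)]

    def scale_to_range(rgb):
        """Local scaling: divide by the largest component if it exceeds 1.0,
        preserving hue and saturation but modifying lightness."""
        peak = max(rgb)
        return rgb if peak <= 1.0 else [c / peak for c in rgb]

    colour = shade_rgb(kd_rgb=(0.8, 0.3, 0.1), ks=0.5, n=10,
                       Ia_rgb=(0.2, 0.2, 0.2), Ip_rgb=(1.0, 1.0, 1.0),
                       d=1.0, d0=1.0, n_dot_l=0.9, r_dot_v=0.95)
    print(scale_to_range(colour))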

Although using the RGB colour model often provides reasonable results, it is a gross over-simplification of the interactions between light and objects. For proper results, it is necessary to apply the illumination model with many more than three samples across the spectrum. Instead of restricting ourselves to a three-component colour model, a more general form of the illumination model may be written:

Iλ = kdλ Iaλ + [Ipλ / (d + d0)] [kdλ (N · L) + ks (R · V)^n]

where any terms which are wavelength-dependent are indicated with the subscript λ. But how do we use these spectral samples to specify an RGB triplet, which we must have to be able to display the colour on a graphics screen? The straightforward way of doing this is to convert the samples to the CIE XYZ colour space and then to the monitor RGB space, which requires knowledge of the chromaticities of the monitor phosphors. Further details can be found in the accompanying module, Colour in Computer Graphics.

Although this spectral sampling technique is physically more correct, we must remember that the reflection models discussed thus far are mainly based on empirical observation: they are constructed such that the results look reasonable.


So in general it would seem inappropriate to use such a method. However, when we come to discuss global models of light reflection we will see that these models are more physically based and would benefit from a more considered use of spectral sampling. Unfortunately, while this is true, it is still the custom of the majority of practitioners in the field to use the RGB colour model in their rendering systems.


3 Shading surfaces

The illumination models we have developed so far can calculate for us the intensity and colour of the light leaving a particular surface at one point. How do we compute intensities across a whole object?

Curved surfaces and complex objects are often approximated using meshes of polygons, as illustrated by Figure 15.

Figure 15: A polygonal model.

We can apply our illumination model to each polygonal facet, but there are several ways to do this: we could compute a single intensity for each polygon, based, for example, on the centre point of the polygon. Or we could apply the shading model to each point in a polygon, so that the intensity varies across the polygon. The latter approach will give a highly realistic effect, but it is very expensive. Instead we could compute the intensity at a few key points on the polygon, and determine the intensity of all the other points using interpolation methods. We now examine each of these methods.


3.1 Constant shading

Constant shading appears unrealistic when two or more polygons share a common boundary, and are each shaded differently. For some applications, such as finite element modelling, this effect may be required. But for realistic modelling, the harsh change of illumination at the boundary is distracting and unrealistic. Additionally, with constant shading each polygon approximating the surface is visible as an entity in its own right, although it is possible to reduce this effect by using more, smaller polygons to approximate an area.

3.1.1 Mach banding

The human visual system is especially good at detecting edges – even when they are not there. Abrupt differences in shading between two facets are perceived to be even greater than they actually are, a phenomenon called the Mach band effect. In another module in this series, Colour in Computer Graphics, we saw that this is due to local contrast enhancement by the horizontal cells in the retina of the eye.

Figure 16 shows (greatly exaggerated for illustration) how the perceived intensity differs from the actual measured intensity across the surface of an object.

Figure 16: The Mach band effect: actual and perceived intensity plotted against distance along the surface.

To reduce the Mach band effect, and eliminate the obvious intensity changes between facets, we need to smooth out the changes of intensity, so that one facet anticipates the shading of the next. We shall look at three techniques: intensity interpolation, normal-vector interpolation and dot-product interpolation.


3.2 Intensity interpolation

In this method, intensity values are computed for the vertices of a polygon, and these are then linearly interpolated across the polygon to determine the intensity values to be used for the intervening points. The method is also called Gouraud shading, after its inventor Henri Gouraud, who developed the technique in 1971.

Figure 17: Computing vertex normals.

First we must determine the intensities at the vertices of a polygon. To compute the intensity at a vertex using our illumination model, we need the vertex normal. Just using the normal for the polygon is no good, because we need to consider the orientations of adjacent polygons to avoid discontinuities at the shared edges. What we really need is the surface normal at that point for the underlying surface being approximated with polygons. We can approximate the surface normal at a vertex by averaging the true surface normals of all the polygons which share the vertex (Figure 17).
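A minimal sketch of this vertex-normal averaging in Python; the face normals below are invented for the example, and a real implementation would gather them from the mesh connectivity.

    import math

    def normalise(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def vertex_normal(face_normals):
        """Approximate the surface normal at a vertex by averaging the unit normals
        of the polygons sharing that vertex (Figure 17), then renormalising."""
        summed = [sum(components) for components in zip(*face_normals)]
        return normalise(summed)

    # Four faces meeting at one vertex (hypothetical values).
    faces = [(0.0, 1.0, 0.0), (0.1, 0.9, 0.2), (-0.1, 0.9, 0.2), (0.0, 0.8, -0.3)]
    print(vertex_normal([normalise(f) for f in faces]))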

Consider the polygon shown in Figure 18. The intensity IQ at Q is interpolated from the intensities IA at A and IB at B using:

IQ = u IB + (1 − u) IA,  where u = AQ/AB

Similarly, for point R,

IR = w IB + (1 − w) IC,  where w = CR/CB

Now we can interpolate between IQ and IR to obtain the intensity at P:

IP = v IR + (1 − v) IQ,  where v = QP/QR


Figure 18: Intensity interpolation.

For each scanline, we can speed things up considerably if we use incremental calculations. For two pixels p1 and p2 at v1 and v2 respectively on the scanline, we have

Ip2 = v2 IR + (1 − v2) IQ

and

Ip1 = v1 IR + (1 − v1) IQ

Subtracting Ip1 from Ip2 gives

Ip2 = Ip1 + (IR − IQ)(v2 − v1)

Writing (IR − IQ) as ΔI and (v2 − v1) as Δv, we have

Ip2 = Ip1 + ΔI Δv

We need only compute ΔI and Δv once per scanline, reducing the computation per pixel to one addition.
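The following Python sketch puts the edge interpolation and the incremental scanline step together. The vertex intensities and interpolation parameters are invented for the example; only the structure (interpolate to the edges, then step across the scanline with one addition per pixel) follows the text.

    def interpolate(i_start, i_end, t):
        """Linear interpolation, as used for IQ, IR and IP above."""
        return t * i_end + (1.0 - t) * i_start

    def gouraud_scanline(i_q, i_r, num_pixels):
        """Incremental interpolation from Q to R: compute delta_I * delta_v once,
        then add it to the running intensity for each successive pixel."""
        if num_pixels == 1:
            return [i_q]
        step = (i_r - i_q) / (num_pixels - 1)   # delta_I * delta_v
        intensities, current = [], i_q
        for _ in range(num_pixels):
            intensities.append(current)
            current += step
        return intensities

    # Edge intensities from hypothetical vertex intensities IA=0.2, IB=0.8, IC=0.5:
    IQ = interpolate(0.2, 0.8, 0.25)    # u = 0.25 along edge AB
    IR = interpolate(0.5, 0.8, 0.40)    # w = 0.40 along edge CB
    print(gouraud_scanline(IQ, IR, num_pixels=5))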

We will no longer have a shading discontinuity for adjacent polygons. The intensity arrived at by interpolation at one edge will be the starting point for the interpolation across the next polygon.

3.2.1 Problems with intensity interpolation

But what if we really want an edge to be visible? The method described above 'averages out' edges so they disappear. The answer is to take special action when a true edge is desired: we compute two vertex normals at the boundary, one for either side, each found by averaging the surface normals of the polygons on the appropriate side of the edge.


Another problem is shown by the corrugated surface of Figure 19a.

Figure 19: Intensity interpolation problems

Here the arrangement of adjacent facets is such that the averaged vertex normals na, nb and nc are all parallel. So each facet will have the same shading, and the surface will incorrectly appear to be flat across the facets. One solution is to introduce extra polygons, as shown in Figure 19b.

Intensity interpolation doesn't eliminate Mach bands completely, but the results are far more realistic than constant shading. The interpolation technique can also distort highlights due to specular reflection.

Another problem arises when a specular highlight lies inside a single polygon – it may be smoothed out and not appear.

3.3 Normal vector interpolation

This technique uses a similar interpolation method, except that the quantity interpolated across the polygon is the surface normal vector, instead of the computed intensity itself. (This method is also known as Phong interpolation.) The vertex normals are computed just as in the intensity interpolation method. Referring to Figure 18 again, and representing the normal vector at a point P as nP, we have

nQ = u nB + (1 − u) nA
nR = w nB + (1 − w) nC
nP = v nR + (1 − v) nQ


where u, v and w are the same as before. The normal vector can also be determined incrementally.

The result is much more realistic than the intensity interpolation method, especially when specular reflection is considered – the highlights are rendered more faithfully, and Mach banding is greatly reduced (but spheres and cylinders are still prone to Mach band effects).

The drawback of this method is that it requires more computation to process each interpolated point in a polygon; at each point the complete shading calculation must be performed using the interpolated normal at that point.
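For comparison, a sketch of normal-vector interpolation across one scanline: the normal is interpolated componentwise, renormalised, and the full shading calculation is applied at every pixel. The toy shading function and the edge normals below are assumptions; in practice the shade callable would evaluate Equation 2-7.

    import math

    def normalise(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def lerp_vec(a, b, t):
        """Componentwise linear interpolation, as used for nQ, nR and nP."""
        return tuple(t * bc + (1.0 - t) * ac for ac, bc in zip(a, b))

    def normal_interpolated_shading(n_q, n_r, num_pixels, shade):
        """Interpolate the normal across the scanline and run the full
        illumination model (the shade callable) at every pixel."""
        results = []
        for i in range(num_pixels):
            t = 0.0 if num_pixels == 1 else i / (num_pixels - 1)
            results.append(shade(normalise(lerp_vec(n_q, n_r, t))))
        return results

    # Hypothetical edge normals and a toy diffuse-only shade (light along +z).
    print(normal_interpolated_shading((0.0, 0.3, 1.0), (0.4, 0.0, 1.0), 4,
                                      shade=lambda n: max(0.0, n[2])))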

3.4 Dot-product interpolation

This method is a compromise between intensity interpolation and normal vector interpolation. Here, the quantities interpolated from one end of the scan line to the other are the dot products N · L and (R · V)^n. This method is sometimes called 'cheap Phong' interpolation.

3.5 Summary

We have derived a simple local illumination model that takes into account the following:

• Direct diffuse reflections from point sources.

• Direct specular reflections from point sources.

Not taken into account are:

• Area light sources

• Indirect reflections:

- Specular to specular

- Specular to diffuse

- Diffuse to specular

- Diffuse to diffuse

Instead, indirect diffuse reflections are modelled as ambient light. Our model can cope with polygonal geometries, interpolating the surface normals to simulate the appearance of a curved surface. Wavelength dependencies are modelled by taking three wavelength samples, one each in the red, green and blue areas of the spectrum, and mapping the intensities directly to the drive signals for a colour monitor.

Now we will look at ways of improving this model to simulate particular effects such as transparency, or to more realistically model some of the factors that were simplified or omitted in the local model.


4 Texture and transparency

The shading methods we have discussed so far render smooth surfaces lit by one or more light sources. We now consider how to model surfaces which are not smooth or simply coloured.

There are two kinds of surface texture to consider.

• Patterns, or colour detail – We superimpose a pattern on a smooth surface, but the surface remains smooth. For example, we could simply model a planet by wrapping a map around a sphere, and rendering the composite object. Adding patterns to a surface is essentially a mapping function and the process is called pattern mapping.

• Roughness – Most natural objects are not smooth. Their surfaces have some kind of micro-structure. For example, we could model an orange using a sphere of the appropriate colour. Even with a realistic shading model, it won't look much like a real orange; the reason is that the surface of an orange is not smooth. In modelling terms, we must alter the uniformity of a surface using a perturbation function that effectively changes the geometry of the surface.

4.1 Patterns

4.1.1 Object space mapping

To provide colour detail we can superimpose a pattern, stored as an image file on the computer, onto a smooth surface. Both the pattern and the object surface are defined in their particular local coordinate systems and texturing is defined by a mapping function between these two systems. Usually the pattern space is specified using (u, v) coordinates, and if the object space coordinates are (θ, φ) then we determine the mapping function as

u = f(θ, φ),  v = g(θ, φ)    (4-1)

Once this is done we can compute the (u, v) coordinates of a point in pattern space corresponding to a point on the object's surface we wish to shade. The colour corresponding to these coordinates is the colour we apply to the surface.

To illustrate the determination of the mapping function we will now derive the mapping for a sphere. The determination requires knowledge of the normal vector N at the point of interest on the sphere's surface, and the process is equivalent to finding the latitude and longitude.

The local coordinate system attached to the sphere (origin at the sphere's centre) is defined by two unit vectors, Tp and Te, which point from the origin to the north pole and from the origin to a reference point on the equator, respectively. We define the mapping (see Figure 20) such that the u-coordinate varies from 0 to 1 along the equator and the v-coordinate varies also from 0 to 1, but from the south pole to the north pole. We put u equal to 0 at the poles.


Figure 20: Mapping function for a sphere

With these definitions we obtain the v-coordinate as

v = φ / π    (4-2)

where φ is a latitudinal parameter given by

φ = cos⁻¹(−Tp · N)    (4-3)

If v is equal to 0 or 1 then u is defined to be zero. If not, then the u-coordinate is calculated as follows:

u = cos⁻¹[(Te · N) / sin φ] / (2π)    (4-4)

However, if the quantity

(Tp × Te) · N    (4-5)

is less than zero we must use u → 1 − u. This test is needed to decide which side of Te the point lies on. That is all there is to the calculation. With the u- and v-coordinates we pick out the colour from the image file containing the pattern and apply this to the appropriate point on the sphere.

In Figure 21 we can see an example of pattern mapping onto a sphere. The pattern consists of a regular array of squares of random lightness, and the mapping onto the surface of a sphere produces the distortions shown on the right of the figure. Note the concentration around the pole and the stretching around the equator.


Figure 21: Mapping a regular pattern onto a sphere

4.1.2 Parametric space mapping

This is similar to object space mapping in that we need to determine a mapping from pattern space to surface space, but the surface in this case is defined parametrically (see the module on Curves and Surfaces).

4.1.3 Environment mapping

This mapping technique is an efficient method of simulating the mirror-like reflection of an environment in an object. It is popularly seen in feature films, such as The Abyss or Terminator 2, which have mixed 3D computer graphics with live camera footage.

The environment mapping technique relies upon the existence of an image, or of images, which depict the view of the scene surrounding the object. To see how this mapping works, imagine placing a cube around the object and photographing the scene through each of its six faces. These six images constitute the environment map. During rendering, when we come to colour the object's surface at a particular point, we construct a reflection vector at this point relative to the corresponding view vector (see Figure 10). We then find where on the surface of the cube the reflection intersects, and thence where in the image maps this vector points to. The colour in the image map corresponding to this intersection is the colour we apply to the surface of the object.

The image maps are, in this case, just perspective projections of the scene onto the six faces of the cube. However, since each view is 90° away from its nearest neighbours, significant distortions occur in those parts of the image maps corresponding to each edge of the cube, which can appear unnatural in a rendering of the object's surface. In most cases, this limitation can be tolerated, especially for objects with a complex shape.


4.2 Roughness

There are various scales of roughness, each producing a different effect.

• Macroscopic roughness may be so large that it affects the geometry, and the surface is visibly not flat. Examples of this scale of roughness include the texture of an orange, the tread of a tyre, and so on.

• Much smaller micro-scale roughness gives a surface that appears smooth, but does not function as a perfect mirror. Examples include common materials such as paper, painted walls, smooth wood, and so on. While small in geometric terms, the roughness is still large compared to the wavelength of the incident light.

• Finally, nano-scale roughness is of a similar magnitude to the wavelength of light and produces a range of interference colours. These effects are rarely modelled in computer graphics.

4.2.1 Macroscopic roughness

Most natural objects are not smooth; their surfaces have some kind of micro-structure. For example, we could model an orange using a sphere of the appropriate colour. Even with a realistic shading model, it won't look much like a real orange; the reason is that the surface of an orange is not smooth.

One approach is to model the geometry in a more complex way. In some cases, this is acceptable. For example, to model a sphere with four large spikes, the geometric form would be altered. To generate a realistic orange, however, with thousands of bumps and pits, would be very complicated. The resulting object would also be very slow to render.

An alternative that is often chosen is to simulate a more complex geometry at the rendering stage, by altering the uniformity of a surface using a perturbation function that effectively changes the geometry of the surface.

A common technique for modelling surfaces which are not smooth is to perturb the normal vector at each point on the surface, using a wrinkle function, before computing the intensity of the point using an illumination model.

Figure 22: Bump mapping


A rough surface results if a small random component is added. More regular features can be produced if the perturbation of the normal vector is a function of its position on the surface. For example, a surface can be made 'wavy' by perturbing the surface normal using a sine function. This is also called bump mapping and is shown in Figure 22.

For a surface Q defined parametrically as Q(u, v), we can define a new (rough) surface Q'(u, v) by adding a wrinkle function f(u, v) to Q at each point in the direction of the original normal at that point. So for a point q on the surface, the new perturbed point q' is:

q' = q + f(q) nq

where nq is the unit normal vector at q.

Figure 23 illustrates a torus whose surface has been made bumpy with this technique. The perturbation function can be defined as an analytic function, or as a set of values stored in a lookup table. Defining values for an entire surface would require a very large table, so a common approach is to store a small number of quite widely spaced values, and to interpolate between them for the intermediate points.

Note that this technique avoids explicitly modelling the geometry of the new, roughened, surface. This has an important visible effect: if you look at the silhouette of a surface roughened in this way, it is still smooth. The roughening only becomes apparent when the shading model is applied.

Figure 23: Example of bump mapping
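The relation q' = q + f(q) nq is simple to express in code. The sketch below displaces a single surface point along its unit normal using a sine wrinkle function; the amplitude, frequency and sample point are made-up values, and a full bump-mapping implementation would go on to perturb the normal used for shading rather than stopping at the displaced point.

    import math

    def perturb(point, unit_normal, wrinkle):
        """q' = q + f(q) * nq: displace a surface point along its unit normal
        by the value of the wrinkle function at that point."""
        f = wrinkle(point)
        return tuple(qc + f * nc for qc, nc in zip(point, unit_normal))

    # A 'wavy' wrinkle function: a small sine ripple over x (hypothetical values).
    wavy = lambda p: 0.05 * math.sin(8.0 * p[0])

    # A point on the plane y = 0, whose unit normal is (0, 1, 0):
    print(perturb((0.4, 0.0, 0.2), (0.0, 1.0, 0.0), wavy))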


4.2.2 Microscopic roughness

The Cook-Torrance model was developed by illumination engineers and describes surfaces whose microscopic roughness is large compared to the wavelength of the incident light.

A reflecting surface is modelled as a collection of randomly oriented microscopic facets, each of which is a perfect reflector. Figure 24 shows a collection of microfacets, and we are only concerned with those facets which reflect light to the viewer. In the figure, these are facets a, c and e.

Figure 24: Modelling a surface with microfacets

The model accounts for three situations:

• A facet reflects directly towards the viewer – this is illustrated by facet a in the diagram.

• A facet is in the shadow of a nearby facet, so it does not receive the full amount of illumination – facet c is partially shadowed by facet b, shown by the dotted line.

• A facet reflects, but the reflected rays are re-reflected by another facet and do not reach the viewer – rays reflected by facet e hit facet f. These multiple reflections contribute to the diffuse reflection of the surface.

The Cook-Torrance model takes these effects into account, and gives the amount of specular reflection of the overall surface ks as

ks = D G F / [π (N · V)(N · L)]

D is the distribution function describing the directions of each facet comprising the surface (usually a Gaussian probability function), and G is a factor which describes the amounts by which the facets shadow and mask each other. F is the Fresnel factor, which gives the fraction of light incident on a facet that is reflected rather than being absorbed. It is defined as

F = ½ [ sin²(φ − θ) / sin²(φ + θ) + tan²(φ − θ) / tan²(φ + θ) ]

4.2.3 Phong vs Cook-Torrance

The two models give different results. In particular:

• The location and intensity of highlights produced by both models are similar for small values of θ, but differ considerably as θ increases towards grazing angles.

• The Phong model assumes that highlights are the same colour as the illumination. In reality this is not true, since the reflectance of materials depends on wavelength. The Cook-Torrance model incorporates this effect (via the Fresnel function) and gives more realistic results.

• The Cook-Torrance model is computationally more expensive than Phong, and for some purposes the extra cost is not justified.
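The way D, G and F combine can be shown in a few lines. In the sketch below the three factors are simply supplied as numbers, since full expressions for D and G are not given here; the vectors and values are hypothetical.

    import math

    def cook_torrance_specular(D, G, F, N, V, L):
        """Combine the facet distribution D, the shadowing/masking factor G and
        the Fresnel factor F into ks = D*G*F / (pi * (N.V)(N.L))."""
        dot = lambda a, b: sum(x * y for x, y in zip(a, b))
        return (D * G * F) / (math.pi * dot(N, V) * dot(N, L))

    # Hypothetical factor values for one viewing geometry (N, V, L taken as unit vectors).
    print(cook_torrance_specular(D=0.8, G=0.9, F=0.1,
                                 N=(0.0, 0.0, 1.0),
                                 V=(0.0, 0.2, 0.98),
                                 L=(0.2, 0.0, 0.98)))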

4.3 The optics of transparency

Not all materials are opaque. Some materials allow light to pass through them, and light passing through a surface is called transmitted, or refracted, light. There are two kinds of refraction:

• Diffuse refraction – This occurs when a surface is translucent but not truly transparent. Light passing through it is jumbled up and scattered by internal and surface irregularities. When a scene is viewed through a surface with diffuse refraction, it appears blurred to a greater or lesser extent.

• Specular refraction – This occurs when light passes through a transparent object. A scene viewed through a specular refractor will be clearly visible, although in general the light rays will have been bent as they passed through the material (Figure 25).

Figure 25: Refraction.

How light behaves when moving from one medium into another is described bySnell’s Law. With reference to Figure 26


η1λ sin θ = η2λ sin θ'    (4-6)

Here η1λ and η2λ are called the refractive indices of the media, and because η depends on the wavelength of the incident light, different colours will be bent by different amounts. We commonly use an averaged value of η for a material, ignoring wavelength dependence. This means that glass prisms, for example, do not produce rainbows. They do bend the light, however - otherwise they would be invisible!

The effect of Snell's Law is significant: a beam of white light passing from air into heavy glass (η ≈ 1.5) at an angle of 30° will be bent by about 11°. On leaving a sheet of glass the light will be deviated again, and the effect is that the incident and refracted beams are parallel, but shifted.
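A quick numerical check of the figures quoted above (a sketch; η = 1.5 for the glass and θ = 30° as in the text):

```python
import math

eta_air, eta_glass = 1.0, 1.5
theta = math.radians(30.0)                                  # angle of incidence
theta_t = math.asin(eta_air / eta_glass * math.sin(theta))  # Snell's law (4-6)
print(round(math.degrees(theta - theta_t), 1))              # ~10.5, i.e. about 11 degrees
```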

Figure 26: Snell’s law.

Note that refraction can have significant visible effects. In Figure 27, we have two refracting objects A and B, both with η greater than that of the surrounding medium, and two opaque objects C and D. If we look from viewpoint v1, ray r1 will undergo refraction by A and we will see C. If there were no refraction, we would see D. Similarly, looking from v2, we see (after refraction) D – again, without refraction we would have seen C.

In practice good results can be obtained by simply shifting the path of the incident light by a constant (small) amount. This avoids the computational overhead of the trigonometry. For very thin objects, the path shift is so small it can be safely ignored. We will ignore the path shift caused by refraction, and concentrate on the drop in intensity as light passes through a transparent material.


Figure 27: Refraction effects (highly exaggerated).

4.4 Modelling transparency

When a refracting object is interposed between the viewpoint and a scene, the scene behind is visible through the object, but the object itself is visible too. Figure 28 shows a scene viewed from above. Polygon pf is in the foreground of the scene, and pb in the background. To model the refraction due to pf, we add some proportion of the intensity of the point b on the background object pb visible through pf.

Figure 28: Modelling transparency.


We introduce a transmission coefficient, t, which is a measure of the transparency of an object. For a completely opaque object, t = 0, and for a perfectly transparent object, t = 1. Then the observed intensity I is

I = (1 - t) If + t Ib

where If is the intensity calculated for the point f on the foreground polygon and Ib is the intensity of the point b seen through it.

This approach also allows us to model non-uniform surfaces: we can make t a function of position on the surface. For example, if we view a scene through a glass vase, the apparent transparency of the silhouette edges of the vase will be less than that of the surface directly facing the viewer. Figure 29a shows how light travelling through the edges of the object has to penetrate more of the material than light travelling directly front to back. Figure 29b shows a top view of a scene where an object B is being viewed through a transparent glass object A.

We want the transparency to increase as we move round the surface from p1 through p2 to p3, and we can use the z-component of the surface normal, nz, at each point p to determine this. A simple linear relationship for the transparency is given by

t = tmin + (tmax - tmin) nz

where tmin and tmax are the object's minimum and maximum transparencies. The point p1 has nz = 0, hence t = tmin as expected. Similarly, at point p3, nz = 1 and t = tmax. Point p2 has a transparency somewhere between tmin and tmax according to its value of nz.

An alternative relationship has also been proposed which is non-linear:

t = tmin + (tmax - tmin) [1 - (1 - nz)^m]

where m is a power factor of the order 2 or 3.
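The following sketch puts the two relationships together, using the convention above (t = 0 opaque, t = 1 fully transparent); the numerical values are illustrative only.

```python
def transparency(nz, t_min, t_max, m=None):
    # Transparency at a point whose unit normal has z-component nz
    # (viewer looking along z).  Linear form by default; pass m for the
    # non-linear variant.
    if m is None:
        return t_min + (t_max - t_min) * nz
    return t_min + (t_max - t_min) * (1.0 - (1.0 - nz) ** m)

def observed_intensity(t, I_f, I_b):
    # Blend the foreground surface intensity I_f with the intensity I_b
    # of the background point seen through it.
    return (1.0 - t) * I_f + t * I_b

# Silhouette point (nz = 0) versus a point facing the viewer (nz = 1):
for nz in (0.0, 0.5, 1.0):
    t = transparency(nz, t_min=0.1, t_max=0.9, m=3)
    print(nz, round(t, 3), round(observed_intensity(t, 0.8, 0.3), 3))
```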

Figure 29: Non-linear transparency.


5 Ray-tracing

All the illumination models we have seen so far are local, so they suffer from one omission: it is not possible to model the reflections between objects in a scene. Effectively they consider each surface to be suspended in space, independent of other surfaces, and illuminated from one or more light sources. We now consider global methods which are able to model the object-to-object interactions in a complex scene.

The first global illumination algorithm to appear was called recursive ray-tracing, which produces very realistic results by applying the laws of optics. This method follows the transport of light energy along rays through a scene, which means in practice that only specularly reflected or transmitted light can be accounted for. In reality, of course, there is a complex mixture of diffuse and specular light within an environment, particularly diffuse light, which to a large extent defines the body colour of a surface. In practice, ray-tracing methods include an element which applies a local illumination calculation to simulate diffuse effects. Without this, ray-tracing would only produce images of objects with black, shiny surfaces!

Another feature of ray-tracing is that its technique can be used on many different kinds of surfaces: not just polygons but also simple primitives such as spheres, cones and tori, as well as parametrically-defined surfaces.

Another powerful and popular method used for the calculation of global illumination is called radiosity which, in contrast to ray-tracing, is particularly suited to environments with only diffusely reflecting and transmitting surfaces. Images produced by the radiosity method are highly realistic, some say more so than ray-tracing. However, the radiosity method is mainly limited to the rendering of objects defined by meshes of polygons. We shall discuss radiosity in the next chapter.

Bear in mind that the motivation behind the development of global methods is the goal of image realism. Here attempts are made to model the physical nature of light energy and its interaction with a material environment to produce images which are close to being photographic and indistinguishable from a real scene. The quest for realism in our images, however, has its drawbacks in terms of computational cost, which arise simply because we wish to model phenomena which are considerably more complex than those considered in a local illumination model.

5.1 Basic algorithm

The basic idea of ray-tracing is that an observer sees a point on a surface as a result of that surface interacting with rays of light emanating from elsewhere in the environment. When we discussed local illumination models, only the interaction of the surface directly with the light sources was considered. In general, a light ray may reach the surface indirectly via reflection from or transmission through other surfaces. The ray-tracing method calculates that part of the global illumination, provided by this object-to-object interaction, which is distributed by specular light. As such, it is view-dependent since, for example, the position of the reflection of one object in the surface of another changes when the position of the view-point changes.

In principle the ray-tracing algorithm is simple: we trace the path of each light ray emitted from each light source, follow all of its reflections and refractions as it interacts with the environment, and if it eventually reaches the view-point, use its intensity to render one part of the image (see Figure 30). In practice this can't be done. Each light source emits an infinite number of rays, the majority of which never reach the view-point anyway. The answer is to start at the view-point and follow rays back into the scene.

Figure 30: Ray tracing from the light sources.

As an aid to projecting rays out into the scene we define both a view-point and a view-plane in object-space coordinates (a simple viewing system is described in more detail below). The view-plane is then divided up into a large number of imaginary pixels which will be mapped onto the real pixels on a graphics display. One ray per pixel is traced from the view-point through the view-plane (this ray is called the primary- or eye-ray) and the nearest object with which it intersects is found. The object thus found will represent the surface which is visible through that particular pixel. (Notice how primary ray-tracing is itself a form of hidden-surface removal.)

Mathematically, an arbitrary ray is defined in parametric form as

r(t) = O + D t    (5-1)


where r is the position vector of a point on the ray's path with parameter t, O is the position of the origin of the ray and D is a unit vector in the direction of the ray. The parameter t is a real-valued number and represents the distance (in whatever units are in use) along the ray from its origin; for all primary rays the origin lies at the view-point.

The next stage in the algorithm is to calculate the intensity of light leaving the surface of the object at the point of intersection (see Figure 31). This intensity will be due to a local component similar in form to that in Equation 2-7 plus components due to globally reflected and transmitted light. We can write this, for m light sources, as follows

I = kd Ia + Σi=1..m Ipi [ kd (N·Li) + ks (R·V)^n ] + kr Ir + kt It    (5-2)

where

• kd, ks, Ia, n, N, R and V are as previously defined

• Ipi and Li are the intensity and direction of light source i

• kr is the global specular reflection coefficient

• Ir is the intensity of the light coming from the direction R

• kt is the global specular transmission coefficient

• It is the intensity of the light coming from the direction T

An expression such as that given in Equation 5-2 exists for each colour band we are using.

Figure 31: Ray-tracing geometry

The calculation of the last two terms in Equation 5-2 is accomplished by spawning new rays at the intersection point in the directions of specular reflection and refraction (along R and T respectively). Of course, if kr is zero, the surface is perfectly diffuse and no global specular reflection occurs, so there is no need to spawn a new reflected ray. A similar point can be made regarding the refraction of light: if kt is zero then the surface can be considered perfectly opaque, and no light energy is transmitted through the surface.

These new rays, called secondary rays, are traced through the scene exactly like the primary rays were, until they intersect another object surface. Then we can apply Equation 5-2 again at the new intersection point to give us the values of Ir and It for the primary ray.

As you can see, this process can be repeated again and again, in a recursive manner, creating new secondary rays at each intersection point and returning the values Ir and It for each previous ray-trace. The recursive nature of ray-tracing is illustrated in Figure 32 where there are four surfaces S1, S2, S3 and S4 in the scene, all of which have some degree of transparency except S4, which is opaque.
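The recursion is easiest to see in code. The sketch below follows the structure of Equation 5-2; the scene, hit and light interfaces (scene.intersect, hit.material and so on) are hypothetical helpers invented for illustration, and only the reflected branch is written out.

```python
import numpy as np

MAX_DEPTH = 5   # see the discussion of the ray-tree in Section 5.5

def trace(origin, direction, scene, lights, depth=0):
    hit = scene.intersect(origin, direction)          # nearest surface, or None
    if hit is None or depth > MAX_DEPTH:
        return scene.background

    N, P, mat = hit.normal, hit.point, hit.material
    V = -direction                                    # unit vector back towards the ray origin
    I = mat.kd * scene.ambient                        # ambient term of Equation 5-2

    for light in lights:                              # local diffuse + specular terms
        if scene.shadowed(P, light):                  # shadow feeler, Section 5.4
            continue
        L = light.direction_from(P)
        R = 2.0 * N * (N @ L) - L                     # mirror of L about N
        I += light.intensity * (mat.kd * max(N @ L, 0.0) +
                                mat.ks * max(R @ V, 0.0) ** mat.n)

    if mat.kr > 0.0:                                  # global specular reflection, kr * Ir
        R_v = 2.0 * N * (N @ V) - V                   # Equation 5-5
        I += mat.kr * trace(P, R_v, scene, lights, depth + 1)

    # The transmitted term kt * It is handled in the same way, spawning a
    # ray in the direction T of Equation 5-7 (see Section 5.3).
    return I
```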

Figure 32: Ray tracing a particular pixel.

We trace the primary ray R in the view direction through pixel p. It first hits S1, producing two secondary rays R1 and T1. There are no other surfaces for T1 to hit, but R1 intersects S2 and produces two new rays R2 and T2. Ray R2 heads off into the background while T2 intersects surface S3, giving rise to rays R3 and T3. Ray R3 leaves the scene but ray T3 hits surface S4. Since S4 is opaque, only a reflected ray R4 is produced.

We can see that rays T1, R2, R3 and R4 won't make a contribution to the intensity seen through pixel p because they are rays which could never arise from the light sources.


5.2 A simple viewing system

The purpose of the viewing system is to aid the computation of eye-rays through the image pixels. In this section we will describe the layout and use of a simple but effective system which can produce perspective views of our scene. The system is shown in Figure 33.

Figure 33: Viewing system

The eye is located at E with position vector E and is looking in the direction of G. The vector G can be directly calculated from the eye and look point positions. The two viewing angles 2θ and 2φ are the horizontal and vertical fields of view of the view-plane (and hence the image) respectively and indicate the amount of perspective in the picture. If it is assumed that the pixels in the image are square, then the two angles are not independent. Only one angle needs to be specified (2θ); the other can be computed from knowledge of the image resolution.

One more piece of information is required before the view-plane is uniquely defined. An 'up' vector (U in the diagram) is needed to indicate the orientation of the view-plane about the look direction.

Next a coordinate system is set up on the view-plane such that the bottom left-hand corner is the origin and the upper right-hand corner is at (1, 1). Now construct two vectors X and Y from

X = G × U
Y = X × G

These vectors are orthogonal to each other and parallel to the view-plane. From the geometry it can be seen that the vectors H and V can be written

H = d tanθ X
V = d tanφ Y

if X and Y are normalised and d is the distance of the view-plane from the eye. Using H and V, a point on the view-plane, say S, can be expressed as

S = E + d G + (2x' - 1) H + (2y' - 1) V


where G has now been normalised and (x', y') is the position of S in the view-plane coordinate system. If the point S represents the centre of a pixel then, in terms of its screen position (i, j),

x' = (i + 0.5) / Rx
y' = (j + 0.5) / Ry

where Rx and Ry are the image resolution. Now it is a simple matter to calculate the direction of the ray through S as

P = S - E = d G + (2x' - 1) H + (2y' - 1) V

We now have all the parameters we need to define the ray originating at the eye and passing through the pixel at point S: once P has been normalised we equate this vector with D in Equation 5-1, and set O equal to E.
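Putting the pieces of this section together, the sketch below computes the eye-ray through pixel (i, j); the function name and the use of square pixels to derive φ from θ are assumptions made for illustration.

```python
import numpy as np

def primary_ray(i, j, E, look, up, theta, Rx, Ry, d=1.0):
    G = look - E                                 # look direction
    G = G / np.linalg.norm(G)
    X = np.cross(G, up); X = X / np.linalg.norm(X)
    Y = np.cross(X, G);  Y = Y / np.linalg.norm(Y)
    phi = np.arctan(np.tan(theta) * Ry / Rx)     # square pixels assumed
    H = d * np.tan(theta) * X
    V = d * np.tan(phi) * Y
    xp = (i + 0.5) / Rx                          # view-plane coordinates of the
    yp = (j + 0.5) / Ry                          # pixel centre
    P = d * G + (2 * xp - 1) * H + (2 * yp - 1) * V
    return E, P / np.linalg.norm(P)              # ray origin O and unit direction D

O, D = primary_ray(256, 192, E=np.array([0.0, 0.0, -5.0]), look=np.zeros(3),
                   up=np.array([0.0, 1.0, 0.0]), theta=np.radians(30.0),
                   Rx=512, Ry=384)
print(O, D)
```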

5.3 Reflection and refraction vectors

In order to trace a ray's path we must be able to calculate the unit vectors R and T shown in Figure 31. Referring to Figure 34, the reflected vector is computed by noting that the vectors R, V and N lie in the same plane and using similar triangles.

Firstly, we find the vector N' by multiplying the normal N by cosθ1. Since cosθ1 is just the dot product of N and V, then

N' = N (N·V)    (5-3)

This vector is called the cosine vector of V since its length is just cosθ1.

Figure 34: Computing the reflected vector

From this we can calculate the corresponding sine vector S as

S = N' - V    (5-4)

By definition the length of the sine vector is sinθ1, since the vectors V, N' and S form a right-angled triangle. We can now see that the reflected vector R is given by

R = N' + S = 2N' - V


or

R = 2 N (N·V) - V    (5-5)
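In code, Equation 5-5 is a one-liner (a sketch; V must point from the surface back towards the ray origin):

```python
import numpy as np

def reflect(V, N):
    # Equation 5-5: V is the unit vector from the surface towards the ray
    # origin, N the unit surface normal.
    return 2.0 * N * (N @ V) - V

N = np.array([0.0, 0.0, 1.0])
V = np.array([0.0, 0.6, 0.8])
print(reflect(V, N))            # [ 0.  -0.6  0.8]
```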

The direction of the transmitted ray T is found by applying Snell's law given in Equation 4-6 and noting that T is in the same plane as V and N (see Figure 35). We compute T by finding the sine vector (S) of the incident vector, scaling it using Snell's law to get the sine vector (S') of the transmitted vector, computing a cosine vector (N'') for the transmitted ray and then adding the sine and cosine vectors to give the transmitted vector.

The sine vectors, S and S', are related by Snell's law

S' = (η1 / η2) S    (5-6)

The length of the cosine vector is given by

|N''| = √(1 - |S'|²)

since cos²θ + sin²θ = 1, hence

N'' = -N |N''| = -N √(1 - |S'|²)

Figure 35: Computing the transmitted vector

Therefore, the transmitted vector is given by

T = N'' + S' = (η1/η2) S - N √(1 - |S'|²)

This equation can be written entirely in terms of the original vectors N and V using Equation 5-6, Equation 5-4 and Equation 5-3. After a little simplifying algebra we arrive at the following result:

T = (η1/η2) [N (N·V) - V] - N √(1 - (η1/η2)² (1 - (N·V)²))    (5-7)


If we take a look at Equation 5-6 again, we can see that it is possible for the length of S' to be greater than one when η1 > η2. This situation will arise when the light is moving from one medium to a less dense medium, such as from glass to air. However, by definition the length of a sine vector is less than or equal to one, and anything else is physically impossible. The significance of this condition is that it will alert us to the fact that total internal reflection has occurred. Hence there is no transmitted vector, and the internally reflected vector can be calculated by flipping the normal N and applying Equation 5-5.
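The corresponding sketch for the transmitted direction follows Equation 5-7 directly, returning None when the test above signals total internal reflection:

```python
import numpy as np

def refract(V, N, eta1, eta2):
    # Equation 5-7: V points from the surface back towards the ray origin,
    # N is the unit normal on the incident side.
    r = eta1 / eta2
    cos_i = N @ V
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)   # squared length of the cosine vector
    if k < 0.0:                                # |S'| > 1: total internal reflection
        return None
    return r * (N * cos_i - V) - N * np.sqrt(k)

N = np.array([0.0, 0.0, 1.0])
V = np.array([0.0, 0.6, 0.8])                  # incidence at about 37 degrees
print(refract(V, N, 1.0, 1.5))                 # air into glass: bent towards the normal
print(refract(V, N, 1.5, 1.0))                 # glass into air: still below the critical angle
```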

5.4 Shadows

Returning briefly now to the calculation of the intensity of light leaving a surface (see Equation 5-2 and Figure 31), another computation is usually made at the same time to determine whether the intersection point is in shadow with respect to the light source.

Testing for shadows is very simple in theory. We first construct a ray whose origin is at the point of intersection and whose direction is towards the light source, vector L in Figure 31. This ray, called the shadow ray or shadow feeler, is then tested for an intersection with all the objects in the scene, as in the case of finding the original intersection. If a hit is found and the intersection is nearer than the light source, then we can say the surface point is in shadow for that light source.

If there are multiple sources of light, shadow rays are traced to each one to determine which of them will cast a shadow on the surface. Only those sources whose shadow rays are not blocked contribute local illumination; the light from the rest should not be included in the calculation of the intensity leaving the surface in Equation 5-2.

Usually the object whose surface we are shading is included in the list of objects we are testing the shadow ray against. However, we may then come across a problem which has been termed ray-tracing acne. This is caused by inaccuracies in the calculated position of the intersection point which cause it to fall a very short but not insignificant distance away from the object's surface. If the point should fall in the space behind the surface with respect to the light source, then an intersection of the shadow ray with its originating surface will be detected. This will produce a dark spot on the surface in the image of the object.

Clearly this is an undesirable artefact which can be compensated for in a number of ways. We can introduce a small tolerance value, say 10⁻⁵, and stipulate that only intersection distances along the shadow ray greater than this value are genuine. Of course, the tolerance value must be greater than any inaccuracies which we think the intersection calculations produce, but still small enough not to miss any real object intersections. Alternatively, we can use this tolerance value to move the shadow ray origin towards the light source before performing the intersection tests. Another way, applicable only if we know that the originating surface cannot 'see' itself, i.e. is convex, is to exclude this surface from the intersection tests.
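A sketch of the shadow test with the tolerance applied by moving the ray origin; scene.nearest_hit_distance is a hypothetical helper returning the distance to the first intersection, or None.

```python
import numpy as np

EPSILON = 1e-5   # tolerance guarding against ray-tracing acne

def in_shadow(P, light_position, scene):
    L = light_position - P
    distance_to_light = np.linalg.norm(L)
    D = L / distance_to_light
    origin = P + EPSILON * D                       # nudge the origin towards the light
    t = scene.nearest_hit_distance(origin, D)
    return t is not None and t < distance_to_light
```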

5.5 The ray-tree

We can visualise the recursion process used in ray-tracing using a diagram called a ray-tree, which depicts the complete pattern of surface intersections that need to be found in order to find the intensity and colour of the light received by one particular pixel. The ray-tree corresponding to the situation in Figure 32 is shown in Figure 36.

When implementing a ray-tracer it is necessary to set up a maximum depth to which rays are traced, otherwise we may find we are spending a great deal of time computing rays which contribute little to the final image. However, if the ray-tree is not allowed to grow far enough, artefacts may be evident in the resulting picture.

There are three ways in which the depth of the ray-tree may be limited. The first, which may happen as a matter of course, occurs when a particular ray shoots off without intersecting any objects, thus terminating this branch of the tree. In our example tree this occurs for rays T1, R2, R3, and R4. The colour returning from these directions is usually a user-defined background value.

Figure 36: An example ray-tree

Another method is to explicitly set the absolute maximum depth for the ray-tree so that rays are not spawned deeper than this level. For example, if we define rays from the eye as depth 0, then a maximum ray depth of 2 will prevent rays T3, R3 and R4 contributing to the intensity received by the eye-ray.

Now, experience has shown that only a small percentage of a typical scene consists of highly reflective or transparent surfaces, so it would be inefficient to trace every ray to the maximum depth. What we need is an adaptive depth control which can take account of the properties of the materials the rays interact with. This is our third method.

Rays are attenuated by the global specular reflection and transmission coefficients. So, as an example, the contribution of ray T3 to the final colour at the top level is attenuated by the product of the coefficients picked up along its path (kr at S1, and kt at S2 and at S3). If this value is below some threshold, there is no point in tracing back further than ray T3.

5.6 Object intersections

In this section we will discuss the problem of finding the intersection point of an arbitrary ray with a particular surface. To determine this we must test the ray against every surface defined in the scene and, if intersections are found, take the one nearest to the ray's origin; that is, we need to find the minimum positive value of the parameter t in Equation 5-1. As an example of the computation of intersections, we will demonstrate the procedure on two common types of surface: a sphere and a polygon.

5.6.1 Sphere intersection

The sphere has one of the simplest kinds of surface and the computation of the intersection point of an arbitrary ray is quite straightforward. We define a sphere using two parameters: its centre, given by the position vector C, and its radius R. Points on the sphere's surface with position vector r then satisfy the following equation:

(r - C)·(r - C) = R²    (5-8)

This is easy to understand: on the left-hand side r - C is a vector originating at the sphere's centre and terminating on its surface, so the length of this vector is equal to the radius of the sphere. The dot product of this vector with itself gives the length squared, hence Equation 5-8.

To find the intersection point we substitute Equation 5-1 into Equation 5-8 and solve the resulting equation for the parameter t. After some vector algebra we arrive at the following quadratic equation

t² - 2µt + λ - R² = 0    (5-9)

where we have defined

µ = D·T,    λ = T·T,    T = C - O    (5-10)

We can solve this quadratic equation using the standard formula, giving

t = µ ± √γ    (5-11)

where γ = µ² - λ + R².

Depending on the sign of γ there exists one of three situations:

• If γ < 0: the ray misses the sphere

• If γ = 0: one intersection which grazes the surface


• If γ > 0: two distinct intersections, with t = t1 = µ + √γ and t = t2 = µ - √γ

Usually the first two conditions are grouped together, since a grazing intersection is not meaningful.

At this point we can see that there is quite a lot of calculation to be done before we decide an intersection occurs. Often this can be avoided by using intermediate results to cut short the computation if we find that an intersection cannot take place. Hence we avoid calculations until they are needed. We will now show how this is done.

First of all, we need to know if the ray's origin lies inside the sphere. To do this, calculate the vector T given in Equation 5-10. The length of this vector is just the distance from the ray's origin to the centre of the sphere, hence we can compare this with the sphere's radius in the following way: if T·T < R², the ray originates inside the sphere and must intersect the surface. Otherwise, the origin is on or outside the sphere and an intersection may or may not occur. These two cases are illustrated in Figure 37. We also define here that a ray whose origin lies exactly on the surface is considered not to intersect the sphere at the ray's origin.

Figure 37: The ray origin in relation to the sphere

The next step is to find the value of the parameter t for the point on the ray closest to the centre of the sphere (see Figure 38, which shows the case where the ray origin is outside the sphere). This distance is given by the projection of T along the ray, i.e. D·T, which as we have seen is equal to µ (we now have a geometric interpretation for the parameter µ). If µ ≤ 0 the intersections, if any, lie behind the ray origin, and if the origin is outside the surface, the ray cannot intersect the sphere (Figure 38a). Hence we can stop the calculation here. If µ > 0 and the origin is outside the sphere (Figure 38b), we still cannot say if an intersection occurs, so we proceed to the next stage of the algorithm.


Figure 38: The point of closest approach (t = µ)

The distance along the ray from the point of closest approach to the sphere's surface is determined next. We have two right-angled triangles to help us, as shown in Figure 39. The point Q is the point of closest approach, at a distance d from the sphere's centre, and l is the distance we require.

Figure 39: Determining the intersection points

Using Pythagoras's theorem,

l² = R² - d²

and

d² = λ - µ²

since OQ² = µ² and OC² = |T|² = T·T = λ. Hence

l² = R² + µ² - λ = γ


We can now see how Equation 5-11 works: µ is the distance to the point of closest approach, while the square root of γ is the (signed) distance from there to the intersection points, if they exist. We can then test the value of γ to find whether intersections occur and, if so, where the forward one is located.

If the ray's origin is outside the sphere, we need the value of t which is least, which is

t = µ - √γ

If the ray's origin is on or inside the surface, we have

t = µ + √γ

Once we know the relevant value of t, we can plug this number into the ray equation (Equation 5-1) and determine the actual position of the intersection point.
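The whole procedure, including the early exits, can be summarised in a short routine (a sketch; the direction D is assumed to be a unit vector):

```python
import numpy as np

def intersect_sphere(O, D, C, R):
    # Nearest forward intersection of r(t) = O + D t with the sphere (C, R).
    T = C - O
    lam = T @ T                      # squared distance from the ray origin to the centre
    inside = lam < R * R             # does the ray start inside the sphere?
    mu = D @ T                       # distance to the point of closest approach
    if not inside and mu <= 0.0:
        return None                  # sphere lies behind the ray
    gamma = mu * mu - lam + R * R
    if gamma <= 0.0:
        return None                  # the ray misses (or merely grazes) the sphere
    root = np.sqrt(gamma)
    return mu + root if inside else mu - root

print(intersect_sphere(np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 1.0]),
                       np.zeros(3), 1.0))        # 4.0
```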

5.6.2 Polygon intersection

The ray-polygon intersection calculation is composed of two stages:

• Find where the ray intersects the polygon’s embedding plane and

• if an intersection exists, find if this point is inside the polygon.

The embedding plane of a polygon is just the infinite plane in which it lies. This plane is defined by the equation

N·P + d = 0    (5-12)

where N is the unit normal, d is the plane constant and P is the position vector of any point in the plane. The normal vector is equal to the polygon's normal and can be computed from

N = (V1 - V0) × (V2 - V0) / |(V1 - V0) × (V2 - V0)|

where V0, V1 and V2 are the position vectors of three of the polygon's vertices. The plane constant is given by

d = -V0·N

From Equation 5-1 and Equation 5-12 we can evaluate the parameter t corresponding to the intersection point as

t = -(d + N·O) / (N·D)    (5-13)

Note first that if N·D = 0 then the ray and the plane (and hence the polygon) are parallel and no intersection is possible. Secondly, if t ≤ 0 the intersection lies behind the ray origin and is of no interest, since we only need forward intersections. Thirdly, if t is greater than the current nearest intersection distance then the present intersection cannot be the nearest and is rejected. Only if all these tests prove false do we proceed to the second stage.


Figure 40: Jordan curve theorem

We now determine whether the intersection point lies within the boundary of the polygon. To do this we use an idea known as the Jordan curve theorem, which is illustrated in Figure 40. Shown here is a complex two-dimensional polygon and a number of points (in black) dotted around it. From each point a line is drawn in an arbitrary direction and the number of crossings with the polygon boundary is noted (points in grey). The theorem states that if the number of crossings is odd, the point is inside the polygon; otherwise it is outside.

Figure 41: Projecting the intersected polygon

To use this theorem we project the polygon onto one of the coordinate planes (xy-plane, xz-plane or yz-plane), making sure, for accuracy reasons, that the projected area is the greatest. We can determine which plane by examining the components of the normal vector N: if the magnitude of the z-component is the largest then we project onto the xy-plane, and similarly for the other components. Projection onto the coordinate planes simply involves ignoring the corresponding component. For example, if |Nz| is the largest component, projecting the point (x, y, z) = (a, b, c) onto the xy-plane gives the point (u, v) = (a, b); see Figure 41. We also perform the same operation on the coordinates of the intersection point. Then we end up with a situation which may look something like that shown in Figure 42a. The plane coordinates of the intersection point are (ui, vi).

Figure 42: Point in polygon test

We can now perform the polygon inside-outside test by translating the polygon so that the intersection point is at the origin (see Figure 42b). This is simply achieved by subtracting the point's coordinates from each vertex. Now use the positive u-axis as the test line and count its intersections with each edge of the polygon. If the total count of crossings is odd, then from the Jordan curve theorem we conclude that the intersection point is inside the polygon.

What happens if the positive u-axis passes exactly through a vertex? This special case can be avoided in the following way: the u-axis divides the plane into two parts and we declare that vertices that lie on this line are on the +v side. In effect we have redefined the u-axis to be infinitesimally close to the original axis, but not to pass through any points. Thus the special cases do not occur.
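The two stages can be combined into one routine, sketched below for a polygon given as a list of vertices; the vertex-on-axis rule is applied by treating v = 0 as lying on the +v side.

```python
import numpy as np

def intersect_polygon(O, D, vertices, t_nearest=np.inf):
    # Stage 1: intersect the embedding plane (Equations 5-12 and 5-13).
    V0, V1, V2 = vertices[0], vertices[1], vertices[2]
    N = np.cross(V1 - V0, V2 - V0)
    N = N / np.linalg.norm(N)                  # unit polygon normal
    d = -V0 @ N                                # plane constant
    denom = N @ D
    if np.isclose(denom, 0.0):
        return None                            # ray parallel to the plane
    t = -(d + N @ O) / denom
    if t <= 0.0 or t >= t_nearest:
        return None                            # behind the origin, or not the nearest
    P = O + t * D

    # Stage 2: project away the dominant axis and apply the crossing test.
    drop = np.argmax(np.abs(N))
    keep = [k for k in range(3) if k != drop]
    poly = [(v[keep[0]] - P[keep[0]], v[keep[1]] - P[keep[1]]) for v in vertices]

    crossings = 0                              # crossings of the +u axis
    for (u0, v0), (u1, v1) in zip(poly, poly[1:] + poly[:1]):
        if (v0 >= 0.0) != (v1 >= 0.0):         # edge straddles the u axis
            u_cross = u0 + v0 * (u1 - u0) / (v0 - v1)
            if u_cross > 0.0:
                crossings += 1
    return t if crossings % 2 == 1 else None

tri = [np.array(p, float) for p in [(-1, -1, 0), (1, -1, 0), (0, 1, 0)]]
print(intersect_polygon(np.array([0.0, 0.0, -5.0]),
                        np.array([0.0, 0.0, 1.0]), tri))   # 5.0
```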

5.7 Acceleration techniques

Testing a ray against every surface for an intersection is a time-consuming operation which gets worse the greater the number of objects in the scene. The operation is also more expensive the greater the image resolution, since more eye-rays are generated. It has been estimated that over 90% of the computation time is spent comparing rays with objects. Therefore, it is advantageous to investigate ways of cutting down on the time taken to find the nearest intersection and so speed up the calculation of an image. A number of schemes are available which either limit the number of tests that have to be made, or delay a costly test until a cheaper test has confirmed its necessity.

5.7.1 Hierarchical bounding volumes

In the early days of the development of ray-tracing a technique was developed which used the concept of bounding volumes. The bounding volume of an object is a simple primitive (e.g. a sphere or box) which has the smallest volume enclosing the object. During the ray intersection test on an object, a test is first performed on the bounding volume, which is usually easier and faster. Only if the bounding volume is intersected is the test applied to the object itself.

The use of a sphere as the bounding volume is the most popular, and the ray-sphere intersection is particularly efficient considering we only need to find out whether an intersection takes place, without actually calculating the intersection point. However, the calculation of the parameters (centre and radius) describing a particular bounding sphere is not that easy. In the ideal case, the sphere with the smallest volume is determined by simultaneously varying both the centre and the radius and looking for the minimum radius which encloses the whole object. In practice, a good approximation is obtained by taking the centroid of the object as the sphere's centre, and the distance from the centroid to the point furthest from it as the radius.
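The approximate bounding sphere described above takes only a few lines (a sketch; in practice points would be the object's vertices or control points):

```python
import numpy as np

def bounding_sphere(points):
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)                              # centroid of the object
    radius = np.max(np.linalg.norm(pts - centre, axis=1))  # furthest point from it
    return centre, radius

print(bounding_sphere([(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]))
```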

Bounding volumes can also be used in a hierarchical structure, whereby clusters of bounding volumes are themselves enclosed in a larger volume. The ray intersection testing would proceed by first detecting a hit with the outer bounding volume and then deciding whether to continue into this volume and perform intersection tests on the inner ones. Of course, we can have volumes within volumes within volumes, if we so wish, to any number of levels. This idea works well in scenes containing objects which are not uniformly distributed, so that a particular ray doesn't need to do much work in areas of the scene which it is not 'looking' at. Such a hierarchical bounding volume scheme can result in a considerable speed-up in the rendering time.

5.7.2 3D spatial subdivision

Other acceleration schemes have also been developed which are more powerful, but not so easy to implement. These methods are based upon the idea of voxels, which are axis-aligned rectangular prisms acting like bounding volumes, except that they fill all of the space occupied by the scene and are non-overlapping. There are two varieties of this method: uniform and non-uniform.

In a uniform arrangement we surround the whole scene with a bounding cuboid and then split this into L×M×N smaller cuboids, which are the voxels. For each voxel we keep a list of the objects which encroach into its space. Then, when we trace a ray through the scene, we need only look at objects which belong to the voxels the ray passes through. Figure 43 shows a 2D example of uniform spatial subdivision of a simple scene containing 8 objects and a ray passing through it. The voxels which are shaded correspond to those which the ray passes through, and we see that a number of objects are intersected by the ray.

We follow the ray voxel by voxel. The first voxel has one object associated with it, but we find that the surface is not intersected. We move on to the next voxel, which contains two objects, one of which has already been tested and was found not to have been intersected. This information would have been retained from the time the first intersection test was performed. The ray intersection tests will show that the polygon has been hit, but the intersection point lies outside the current voxel, so we cannot be sure that this is the nearest intersection. We can see from the diagram that this is definitely not the nearest intersection, but the computer has still to discover this! We move on but keep this information to avoid recalculation. The next voxel contains a triangle as well as the previous polygon, and we find that the triangle is hit first, and the intersection point is in the current voxel. Thus we have found the nearest intersection.

We can see that the ray passes through another object further 'down-stream' (in fact there could easily have been many more objects), but we didn't have to perform any intersection test for this object, which would have been the case without this acceleration method. In our example scene we needed to perform only 3 out of 8 possible intersection tests. In a more realistic 3D scene, containing maybe hundreds of thousands of objects, being able to restrict intersection testing to objects in the neighbourhood of the ray's trajectory can significantly reduce the rendering time.

Figure 43: Uniform spatial sub-division (2D analogy)

In the non-uniform method of 3D space subdivision, the aim is to produce a partitioning which emphasises the distribution of the objects throughout the scene. The structure used in this case is called an octree, which is a hierarchical tree of non-overlapping voxels of various sizes. The octree is constructed by first surrounding the whole scene with a bounding cuboid. This volume is then divided into eight equal-sized sub-volumes or voxels, and for each voxel a list of objects associated with it is found. Now, provided a specified maximum tree-depth has not been reached, each voxel is sub-divided again into eight if the number of objects in that voxel exceeds a pre-set maximum. In this way the sub-division into smaller voxels only occurs where there are lots of objects. In regions of the scene with few or no objects the voxels are large.

Figure 44: Non-uniform spatial sub-division (2D analogy)


We will demonstrate the use of an octree by analogy with the 2D case, in which the sub-division produces a structure called a quadtree (see Figure 44). Again the shaded voxels indicate those traversed by the ray, and sub-division is continued until there is at most one object per voxel.

The procedure for finding the nearest intersection is the same as in the uniform case: we process in order only those voxels which the ray passes through, until we find an intersection point in the current voxel. In the example shown, 6 voxels are considered before an intersection is found and only 3 distinct intersection tests are made. We can see that large areas of the scene are empty, indicated by the large voxels, where the ray passes through very quickly. Only where there are higher concentrations of objects does the ray start to do any useful work in finding an intersection.

In any kind of 3D spatial sub-division, there is usually a trade-off between how finely we sub-divide the scene (influenced by the distribution of objects in the scene and the maximum number of objects per voxel) and the time spent hopping from one voxel to another along the ray. These features can normally be controlled by the user and adjusted for optimum efficiency. When this is done, significant savings in rendering time can be accomplished, especially for complex scenes containing huge numbers of objects.

As an indication of what can be achieved, it was once quoted by the developers of the uniform sub-division method that a scene consisting of 10,584 objects, which was ray-traced in 15 minutes using their technique, took an estimated 40 days to render using the traditional method! This is indeed a very important and powerful technique.

5.8 Image aliasing

As we have seen, when we use ray-tracing to generate a synthetic image we pass an eye-ray through each pixel in the view-plane. What we have done, of course, is sample the view-plane in a number of discrete directions, and this sampling process will introduce into our image an artefact called aliasing.

Aliasing is a direct consequence of trying to represent a continuous signal (the distribution of light intensity across the view-plane) in terms of a sequence of discrete samples: if we attempt to reconstruct the signal from our samples (for example, by displaying the samples on a graphics screen), distortions will appear and the reconstruction will not be exactly the same as the original. It is a well known fact from the theory of signal processing that if we wish to sample a signal without any loss or distortion, then we must sample at a rate which is at least two times the highest frequency present in the original signal (this is the Nyquist theorem and the minimum sampling frequency is called the Nyquist limit). However, since the signal representing our scene has an infinite range of spatial frequencies, no finite sampling frequency will be sufficiently high to prevent aliasing occurring. Aliasing causes frequencies higher than the Nyquist limit to appear in the reconstructed signal at lower frequencies: in effect, high frequencies are masquerading as low frequencies. Hence the term aliasing.


5.8.1 Anti-aliasing

In practical terms, aliasing in a ray-traced image causes smooth edges (along surface boundaries, for example) to appear jagged. This effect is illustrated in Figure 45 where in (a) we show a part of an image and the pixel boundaries, and the projection of part of an object onto the image plane, while in (b) we show the result of sampling the projection at each pixel centre.

Figure 45: Image aliasing

In simple terms we can see that the cause of image aliasing lies in the representation of a finite area of the image (the pixel coverage) by a single point. We can also see that it may be possible to reduce the effects of aliasing if we could reduce the size of this area. This is in fact true, and one way of reducing the pixel area is to increase the resolution of the image, thus increasing the sampling rate. In practice, this anti-aliasing technique is implemented by the equivalent process of sending multiple rays through each pixel, i.e. we supersample the image. There are two methods available for determining how the samples are distributed within the pixel. These perform uniform and jittered sampling.

Figure 46: Supersampling distributions



In uniform supersampling, each pixel is subdivided into a regular array of sub-pixels and the image is sampled at the centre of each sub-pixel, while in the jittered scheme the sample points in the uniform sampling picture are displaced by small random amounts from their original positions. In Figure 46 we show an example of both types of supersampling: in (a) a pixel is uniformly supersampled using 16 sub-pixels and in (b) a pixel is supersampled using 16 jittered samples obtained by jittering the grid of points in (a).

Once we have sampled our pixel and traced rays through the sampling points, we must combine the intensities returned by each sample to arrive at a representative value for the whole pixel. This processing of the samples is called filtering, and there is more than one way of doing this. We could simply average the samples (box filtering), which works quite well in most cases. More realistic filters could be used, such as a Gaussian filter, which weights the values of the samples in the averaging according to their distance from the centre of the original pixel. This is in fact a good filter to use for images which will be displayed on a CRT graphics screen because it mimics closely the intensity variation across a pixel produced by the monitor electron beam.
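A sketch of jittered sample generation and box filtering for one pixel; the 'edge' used to produce sample intensities stands in for tracing an eye-ray through each sample position.

```python
import random

def jittered_samples(n):
    # An n-by-n uniform grid of sample positions within the pixel, each one
    # displaced randomly inside its own sub-pixel (cf. Figure 46b).
    return [((i + random.random()) / n, (j + random.random()) / n)
            for i in range(n) for j in range(n)]

def box_filter(intensities):
    # Combine the samples by simple averaging.
    return sum(intensities) / len(intensities)

positions = jittered_samples(4)                                    # 16 samples per pixel
intensities = [0.0 if u + v < 1.0 else 1.0 for u, v in positions]  # a toy edge through the pixel
print(box_filter(intensities))
```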

Figure 47: Anti-aliasing using uniform supersampling

Shown in Figure 47 is an example of uniform supersampling applied to the image in Figure 45. We supersample at 9 samples per pixel and use a box filter to calculate the pixel values. In the top of the diagram an enlargement of 6 pixels near the centre of the image is shown, with the sub-pixel boundaries indicated by dashed lines. The numbers to the side of this are the pixel values obtained by sampling the centre of each sub-pixel and averaging the results. The effect of our simple anti-aliasing technique is shown in the lower part of Figure 47, and we can see how the edge has become less jagged: there is more of a smooth transition from pixels which see only the object to pixels which do not see the object at all.

A problem with uniform supersampling, especially if the sampling density is still too low, is that aliasing artefacts can still remain, though at a reduced perceptual level. The appearance of jagged edges may still be detectable, particularly so with edges which have small or large gradients relative to the image raster. The use of jittered sampling improves this situation because the aliasing frequencies are replaced by noise in the image, which the human visual system is more tolerant of.

Pixel values for the six enlarged pixels: 0, 1/9, 5/9, 7/9, 1, 1


5.8.2 Adaptive supersampling

So far in this discussion of aliasing we have assumed that an anti-aliasing technique is applied to every pixel in the image regardless of what each pixel actually sees. However, it may not always be the case that a particular pixel needs to be supersampled, since not all parts of the image will have an edge, or similar high contrast boundary, passing through them. This means we will be doing more work than is necessary: many of the pixels will be getting, say, 9 samples when only one is needed.

To remedy this, we need to adopt a method of anti-aliasing which concentrates most of its effort in the areas of the image that need attention: these methods are called adaptive supersampling. In adaptive anti-aliasing, we attempt to locate those pixels which need special attention. We can do this by first shooting rays through the corners of the pixels, rather than their centres, and for each pixel examining the difference in the intensities between these 4 samples. Should the variation across the pixel be too large, then the pixel is split into 4 sub-pixels and the process repeated to as many levels as desired. The following simple heuristic could be used to decide whether the contrast between samples in a pixel or sub-pixel warrants further sub-division:

C = (Imax - Imin) / (Imax + Imin)

where Imax is the largest sample returned and Imin is the smallest. Values of C will need to be determined for each colour band, usually R, G and B, and then if any of these exceed pre-set values another level of supersampling is performed. Typical values for the R, G and B thresholds are 0.25, 0.20 and 0.4 respectively. An adaptive anti-aliasing scheme is illustrated in Figure 48, which shows three levels of supersampling.
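In code, the decision to subdivide a pixel might look like the sketch below; the corner samples are invented values used only for illustration.

```python
def contrast(samples):
    # Contrast heuristic between the corner samples of a pixel or sub-pixel.
    I_max, I_min = max(samples), min(samples)
    if I_max + I_min == 0.0:
        return 0.0
    return (I_max - I_min) / (I_max + I_min)

THRESHOLDS = (0.25, 0.20, 0.4)        # per-channel thresholds quoted in the text (R, G, B)

corner_rgb = [(0.80, 0.70, 0.20), (0.10, 0.60, 0.20),
              (0.70, 0.70, 0.20), (0.75, 0.65, 0.20)]
subdivide = any(contrast([c[k] for c in corner_rgb]) > THRESHOLDS[k] for k in range(3))
print(subdivide)                      # True: the red channel varies too much
```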

Figure 48: Adaptive anti-aliasing

First note that in moving from one level of supersampling to the next, we can reuse many of the samples already calculated. Also, it is obvious that far fewer samples are needed in an adaptive than in a global sampling scheme. In the example shown, if we were to supersample all 6 pixels to the same level, we would have to trace 117 eye-rays compared to 62 in the adaptive method. This amounts to a substantial saving in the rendering time.

5.9 Example images

In this final section, we will take a look at some synthetic images which were computed using the techniques outlined in this chapter¹. The first image we will discuss is shown in Figure 49, which depicts a view of a stylized scene containing a large smooth ball sitting between two mirrors, one mirror hanging from a wall and the other on a stand. The standing mirror cannot be observed directly since it is behind the view-point, but its reflections in the other mirror can be seen clearly.

Figure 49: Ray-traced scene containing highly specular surfaces

The whole scene contains 9 spheres (one large ball and 4 smaller balls at the corners of each mirror), 9 cylinders (the mirror frames and the leg of the mirror stand), 2 polygons (the mirrors themselves), a plane (the wall) and 2 discs (the floor and the base of the mirror stand). Notice how some of the surfaces have been textured: the floor has a checker board pattern, the mirror frames have a wood texture and the wall is blotchy. All surfaces have been defined as purely diffuse except for the large sphere, which also has a local specular component of 0.9, and the mirrors, which only have a global specular reflectivity of 0.9. The fact that the reflectivity of the mirrors is not unity produces a darkening of the image in the direction of the multiple reflections.

1. All of the images were originally computed as full, 24-bit colour pictures, but for printing in this document each one was converted to grey-scale (maximum of 256 levels) and the printing used a halftone screen to reproduce these grey levels. In consequence, the image quality has diminished greatly and does not compare favourably with the originals. This point should be kept in mind during this discussion. However, the original image files are available via FTP as part of the on-line distribution of this module.

There are two point light sources of unit intensity and an ambient source with an intensity of 0.3. Observe the shadows on the wall and the floor, which would have been rendered very dark without any ambient illumination. The ray-tree was descended to a depth of 12 to capture the multiple reflections between the mirrors, and the image was rendered at a resolution of 500x500 pixels using adaptive supersampling at 16 jittered samples per pixel. A Gaussian filter was used to reconstruct the pixel values.

Shown in Figure 50 is another example image which illustrates the ability of ray-tracing to render transparent media. The object in the foreground has been modelled as a spherical glass shell of refractive index 1.3 and a global specular transmissivity of 0.99. The surface of the glass also has a local specular reflection coefficient of 0.9, which gives rise to the specular highlights both on the outside and the inside of the shell. The refraction of light is clearly seen in this image.

The scene contains only one point light source plus ambient at a level similar to the previous image. Eye-rays were traced to a depth of 5 and rendered at 750x500 pixels using adaptive supersampling at 9 jittered samples per pixel.

Figure 50: Ray-traced scene containing transparent surfaces

One important feature to notice about this image is the appearance of the shadow behind the glass object. What we see is not at all realistic, for the following reasons. First of all, the refraction of light through the glass has not been taken into account. The intensity of light in the shadow has been calculated by a method similar to that described in Section 4.4, which only takes into account the attenuation of the light on a straight line path through the glass (see how the inner region of the shadow, where the light travels a longer path through the glass, is darker than the periphery). It is in fact extremely difficult to calculate realistic shadows behind transparent objects using conventional ray-tracing. These sorts of shadows, called caustics, are in reality a complicated pattern of light and shade caused by the focusing effect of the glass.

We will now use the image displayed in Figure 49 to illustrate the effect of prematurely truncating the depth of the ray-tree, and also show how aliasing affects the image quality.

Figure 51 contains four images of the same scene ray-traced to various ray-depths. The first image has a depth of 0, indicating that only eye-rays are followed: effectively only hidden surface removal has been performed. Notice how the mirror on the wall is black, since no reflected rays have been spawned. The rest of the image looks unchanged since it is composed only of diffuse surfaces. In the next image the depth is 1, so we get one reflection from the mirror on the wall but nothing from the other mirror. Increasing the depth increases the number of inter-reflections spawned by the eye-ray, increasing the realism in the image.

Figure 51: Ray-tracing to depths 0, 1, 2 and 3


To illustrate the effect of aliasing, we will render a small part of the image of Figure 49 with and without anti-aliasing. We will also investigate the effect of different supersampling methods.

In Figure 52 three images are shown which were calculated using uniform supersampling. The first image used only one eye-ray per pixel, and aliasing is clearly seen in this enlargement along high contrast boundaries. The second and third images are anti-aliased using 4 and 9 samples per pixel, respectively. Notice the gradual improvement in the image quality, though at the expense of a slight blurring around the edges. Using 9 samples per pixel would seem adequate in this example image. If you look carefully, there is still a hint of aliasing apparent in this image, but remember we're looking at an image under magnification; the aliasing will not be noticeable in the full image. In all of these images a very low contrast threshold has been set to force supersampling along all edges, to illustrate this point.

Now compare this situation with jittered supersampling: see Figure 53. Again 1, 4 and 9 samples per pixel are used to calculate the three images. Even with 1 sample we can use jittering, but the result is not particularly pleasant! Again an improvement in the quality of the image results when we use more samples per pixel. In addition, all traces of aliasing are removed in each case when we use jittered sampling; the aliasing is replaced with noise in the image. But only when enough samples per pixel are used is the noise level reduced to an acceptable level. The result of using 9 jittered samples will be virtually indistinguishable from using 9 uniform samples in the full image.


Figure 52: Uniform supersampling at 1, 4 and 9 samples per pixel


Figure 53: Jittered supersampling at 1, 4 and 9 samples per pixel


6 Classical radiosity

The radiosity of a surface is a measure of the rate at which energy leaves that surface. As well as describing a physical flux, the term radiosity is also used in computer graphics to describe algorithms which calculate the distribution of light in an enclosed environment based upon the principle of the conservation of energy.

Radiosity methods attempt to accurately model the effects of ambient light. Before, we considered the ambient component in our illumination model to be Ia kd, where Ia is constant for the scene. In practice this is not the case, so the radiosity model was developed in an attempt to express the ambient term exactly.

6.1 The radiosity equation

When applying the radiosity method to the problem of rendering an image of an enclosure, the main assumption that is made is that all surfaces are ideal diffuse reflectors or emitters. This assumption, whilst limiting us to environments which are diffuse (making it impossible to render metallic or mirrored surfaces), does leave us with a view-independent rendering technique. Once the distribution of light throughout the environment has been calculated, the only step required to render different views of the scene is to apply hidden-surface removal techniques for each of the required views.

The radiosity method also assumes that the scene is comprised of a number of discrete patches, each of which has a uniform radiosity. Then, considering the equilibrium of energy in the scene, we can write

A_i B_i = A_i E_i + \rho_i \sum_{j=1}^{n} B_j F_{ji} A_j \qquad (6-1)

where

• Bi is the radiosity of a particular patch i, measured in energy/unit time/unit area (e.g. W m⁻²), and is the total amount of energy (emitted and reflected) leaving a unit area of the patch in unit time

• Ai is the area of the patch

• Ei is the light emitted from the patch (also in W m⁻²)

• ρi is the diffuse reflectivity of the patch (which serves the same purpose as kd), and is the fraction of the energy impinging on the surface which is reflected off it (hence ρi is in the range [0, 1])

• Fji is the form-factor of the jth patch relative to the ith, and represents the fraction of the energy leaving patch j which arrives at patch i without interaction with other patches (Fji is in the range [0, 1])

Looking at Equation 6-1 we can see that the term on the left-hand side is the total energy per unit time (i.e. the power) leaving the ith patch. This is then equated to the sum of two terms, the first representing the total emitted power (if any) and the second the total power reflected. This last term is easy to understand since


BjAj is the total power leaving patch j, and of this only the fraction Fji directly reaches patch i. Hence the sum of all such expressions gives the total power arriving at patch i from all other patches, and then only the fraction ρi is actually reflected (the rest is absorbed).

Note that all the patches in the enclosure are treated as sources of light, with either zero or positive values for Ei. Thus images with arbitrarily shaped light sources can now be rendered by simply setting Ei to some non-zero value.

6.2 Form-factor calculation

In order to solve Equation 6-1 for Bi it is first necessary to calculate the form-factor between every pair of patches in the scene.

As has already been mentioned, the form-factor Fji is equal to the fraction of the energy that leaves the jth patch and arrives at the ith patch without interacting with any other patches in the environment. The form-factor makes no attempt to account for light that arrives at patch i after having emanated from patch j and been reflected off (say) patch k - this is accounted for by Bk and Fki in Equation 6-1 above.

Figure 54: Form factor geometry.

Figure 54 shows two patches i and j, which are split into a number of elemental areas dAi and dAj (only one shown for each patch, for clarity). The elemental areas are separated by a distance R, and the normals to the areas are inclined to the line between them by angles φi and φj. It can be shown that the form-factor between these two elemental areas is exactly given by

F_{dA_j dA_i} = \frac{\cos\varphi_i \cos\varphi_j}{\pi R^2} \, dA_i \qquad (6-2)

With the assumption already made about the radiosity being constant across each patch, the patch-to-patch form-factor is simply the area average of Equation 6-2,


F_{ji} = \frac{1}{A_j} \int_{A_j} \int_{A_i} \frac{\cos\varphi_i \cos\varphi_j}{\pi R^2} \, dA_i \, dA_j \qquad (6-3)

Note that the limitation caused by this assumption can be circumvented by splitting any patches which have a non-negligible radiosity gradient across their surface into smaller patches across which the gradient is negligible.

Equation 6-3 assumes that every part of patch i can be seen from every part of patch j, which is not generally the case. To account for occlusion by other patches, a function Hji is introduced which has a value of 1 if a straight line from dAi to dAj does not pass through any other patch, and 0 otherwise. Hence Equation 6-3 becomes

F_{ji} = \frac{1}{A_j} \int_{A_j} \int_{A_i} \frac{\cos\varphi_i \cos\varphi_j}{\pi R^2} \, H_{ji} \, dA_i \, dA_j \qquad (6-4)

6.2.1 Reciprocity relation

There is a relationship between the form-factor Fji and its converse Fij (the form-factor of the ith patch relative to the jth) which states that

A_i F_{ij} = A_j F_{ji} \qquad (6-5)

This is called the reciprocity relation for form-factors. This relation is very useful when implementing an algorithm that evaluates the form-factors for an enclosure, since it is clear that we need only calculate the form-factors Fij for j > i. The n form-factors Fii will all be zero for a scene made up of flat patches, and the remaining n(n-1)/2 form-factors can be found from Equation 6-5.
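As a rough illustration (not part of the original notes; the names are invented), the following C sketch completes an n × n form-factor matrix from its upper triangle using the reciprocity relation, and also provides the row-sum check that corresponds to Equation 6-6 introduced next.

```c
/* Sketch: complete the form-factor matrix using the reciprocity relation
 * Ai*Fij = Aj*Fji (Equation 6-5).  The upper triangle F[i][j], j > i, is
 * assumed to have been computed already; A[] holds the patch areas. */
void apply_reciprocity(int n, double **F, const double *A)
{
    for (int i = 0; i < n; i++) {
        F[i][i] = 0.0;                        /* a flat patch cannot see itself */
        for (int j = i + 1; j < n; j++)
            F[j][i] = A[i] * F[i][j] / A[j];  /* Fji = Ai Fij / Aj */
    }
}

/* Accuracy check (Equation 6-6 below): each row should sum to ~1 in a
 * closed environment. */
double form_factor_row_sum(int n, double **F, int i)
{
    double s = 0.0;
    for (int j = 0; j < n; j++)
        s += F[i][j];
    return s;
}
```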

Another useful relation that can be applied to the calculation of form-factors is the following

\sum_{j=1}^{n} F_{ij} = 1 \qquad (6-6)

which is derived from the definition of the form-factor as being the fraction of the energy that leaves patch i and impinges directly on patch j. Since we are assuming that the environment is closed, Equation 6-6 is an equivalent statement of the principle of the conservation of energy. Equation 6-6 is usually employed as a check of the accuracy of the method which calculates the form-factors.

6.3 Solving the radiosity equation

Having established the form-factor between all pairs of patches in the enclosure, it should now be possible to solve Equation 6-1 for the radiosity values Bi. This task is made simpler by applying the reciprocity relation in Equation 6-5 to Equation 6-1. This enables us to write

B_i = E_i + \rho_i \sum_{j=1}^{n} B_j F_{ij} \qquad (6-7)


To understand how to solve these equations, we will consider a simple situation with three patches: i = 1, 2 and 3. Applying Equation 6-7, we can derive expressions for the radiosities of each surface

B_1 = E_1 + \rho_1 (F_{11} B_1 + F_{12} B_2 + F_{13} B_3)
B_2 = E_2 + \rho_2 (F_{21} B_1 + F_{22} B_2 + F_{23} B_3) \qquad (6-8)
B_3 = E_3 + \rho_3 (F_{31} B_1 + F_{32} B_2 + F_{33} B_3)

We now have three simultaneous equations in the three unknowns B1, B2 and B3. We can compute all the form-factors from the geometry of the scene, and we know E for each surface – if it is a light emitter we know its intensity; if it is not, E = 0.

We can rearrange the equations into matrix form

\begin{pmatrix} 1-\rho_1 F_{11} & -\rho_1 F_{12} & -\rho_1 F_{13} \\ -\rho_2 F_{21} & 1-\rho_2 F_{22} & -\rho_2 F_{23} \\ -\rho_3 F_{31} & -\rho_3 F_{32} & 1-\rho_3 F_{33} \end{pmatrix} \begin{pmatrix} B_1 \\ B_2 \\ B_3 \end{pmatrix} = \begin{pmatrix} E_1 \\ E_2 \\ E_3 \end{pmatrix} \qquad (6-9)

and solve this system for the Bi terms using a standard technique such as Gaussian elimination.

Note that equations of the form shown in Equation 6-9 need to be set up and solved for each wavelength of light of interest. Typically these equations are solved for wavelengths which correspond to red, green and blue light, so that the Bi can easily be rendered.
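Purely as an illustration (the notes themselves suggest Gaussian elimination, and all names below are invented), the same system can be solved by a simple gathering (Gauss–Seidel) iteration; provided every reflectivity is below 1 and the row sums of F obey Equation 6-6, the matrix of Equation 6-9 is diagonally dominant, so the iteration converges.

```c
/* Sketch: solve Equation 6-7, B = E + diag(rho) F B, for one colour band.
 * F[i][j] = Fij, rho[] and E[] are the patch reflectivities and emissions,
 * and B[] receives the solution. */
#include <math.h>

void solve_radiosity(int n, double **F, const double *rho, const double *E,
                     double *B, int max_iter, double tolerance)
{
    for (int i = 0; i < n; i++)
        B[i] = E[i];                              /* initial guess: emission only */

    for (int iter = 0; iter < max_iter; iter++) {
        double change = 0.0;
        for (int i = 0; i < n; i++) {
            double gathered = 0.0;
            for (int j = 0; j < n; j++)
                gathered += F[i][j] * B[j];       /* light arriving at patch i */
            double Bnew = E[i] + rho[i] * gathered;
            change += fabs(Bnew - B[i]);
            B[i] = Bnew;                          /* updated value used immediately */
        }
        if (change < tolerance)
            break;                                /* converged */
    }
}
```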

6.4 The hemi-cube algorithm

The hemi-cube method was one of the first numerical techniques for evaluating form-factors in complex multi-object environments. The algorithm is an approximation to the integral in Equation 6-4, in which an assumption is made that the separation distance between patches is large compared to their size. This is frequently true, so the inner integral in Equation 6-4 is approximately constant no matter where the elemental area dAj is situated on patch j. We can thus write

F_{ji} \approx \frac{1}{A_j} \int_{A_j} \left( \int_{A_i} \frac{\cos\varphi_i \cos\varphi_j}{\pi R^2} \, H_{ji} \, dA_i \right) dA_j = \int_{A_i} \frac{\cos\varphi_i \cos\varphi_j}{\pi R^2} \, H_{ji} \, dA_i \qquad (6-10)

which is effectively the form-factor between the elemental area dAj and the finite area Ai.

To evaluate the integral in Equation 6-10, an imaginary hemi-cube is centred about patch j, that is to say a cube of edge length 2 which has two of its faces parallel to the patch. Only the half of the cube which is 'in front' of the patch is relevant, hence the term hemi-cube. The situation is illustrated in Figure 55.

The faces of the hemi-cube are divided into pixels (not to be confused with pixels on a graphics screen) of equal size. The number and size of these pixels are implementation dependent - higher densities giving greater accuracy in the value of the


form-factor. Typical resolutions for the top face range from 50×50 to several hundred squared.

Figure 55: The projection of a patch onto a hemi-cube

The next step in the algorithm is to project every patch in the enclosure onto the hemi-cube and determine which pixels the projection covers. Two patches which project to the same pixel can have their depths compared and the more distant one rejected, since it cannot be seen from the receiving patch. This operation accounts for the Hji term in Equation 6-10. The hemi-cube faces are therefore effectively acting like a Z-buffer, except that no intensity information is considered, only the identity of the patch involved.

The form-factor associated with each hemi-cube pixel, known as the delta form-factor, is pre-calculated and referenced from a look-up table at run-time. Once the delta form-factors have been found we can calculate the form-factor of a particular patch simply by summing the delta factors of all the pixels the patch projects to, i.e.

F_{ji} = \sum_{q} \Delta F_q

where ∆Fq is the delta form-factor of pixel q.

Having found the n form-factors, the imaginary hemi-cube can now be centred about another patch in the enclosure and the process repeated until all form-factors have been found.
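A minimal sketch of this summation step (not from the notes; the names are invented) is shown below. After the visibility pass, each hemi-cube pixel records the index of the closest patch visible through it, and the delta form-factors of the pixels covered by each patch are simply accumulated.

```c
/* Sketch: accumulate the form-factors Fji seen from one hemi-cube placed at
 * patch j.  item[p] holds, for every hemi-cube pixel p, the index of the
 * closest patch visible through that pixel (or -1 if none), as left by the
 * Z-buffer-like visibility pass; dF[p] is the pre-computed delta form-factor. */
void sum_delta_form_factors(int n_pixels, const int *item, const double *dF,
                            int n_patches, double *Fj)    /* Fj[i] receives Fji */
{
    for (int i = 0; i < n_patches; i++)
        Fj[i] = 0.0;
    for (int p = 0; p < n_pixels; p++)
        if (item[p] >= 0)
            Fj[item[p]] += dF[p];   /* sum the delta factors covered by each patch */
}
```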


6.4.1 Calculating the delta factors

Referring back to Equation 6-2, we can see that the delta form-factor of a hemi-cube pixel q is given by

\Delta F_q = \frac{\cos\varphi_i \cos\varphi_q}{\pi r^2} \, \Delta A \qquad (6-11)

where

• φi is the angle between the normal to patch i and the line joining the centre of the patch to pixel q

• φq is the angle between the normal to the face containing pixel q and the line joining the centre of patch i to pixel q

• r is the distance of hemi-cube pixel q from the centre of patch i

• ∆A is the area of the hemi-cube pixel q

Remember that in the hemi-cube algorithm we are modelling patch i as an elemental area, so that the result of Equation 6-10 can be used, and approximating the hemi-cube pixel also as an elemental area. Hence we can use Equation 6-2.

For a pixel on the top face of the hemi-cube (see Figure 56)

\cos\varphi_i = \cos\varphi_q = \frac{1}{r}

where

r = \sqrt{x^2 + y^2 + 1}

and the position of the hemi-cube pixel q is given by (x, y, z) in a local coordinate system centred on the patch. Thus from Equation 6-11 we get

\Delta F_q = \frac{\Delta A}{\pi \left( x^2 + y^2 + 1 \right)^2}

Figure 56: Delta form-factor for a pixel on the top face


Figure 57: Delta form-factor for a pixel on a side face

For a side face (see Figure 57)

\cos\varphi_i = \frac{z}{r} \quad \text{and} \quad \cos\varphi_q = \frac{1}{r}

where

r = \sqrt{x^2 + z^2 + 1}

hence

\Delta F_q = \frac{z \, \Delta A}{\pi \left( x^2 + z^2 + 1 \right)^2}

For a side-face perpendicular to the X-axis, substitute y for x in these expressions.

Because of the hemi-cube’s symmetry, it is only ever necessary to store 1/8 of the delta factors associated with the top face and 1/4 of those on a side face.
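By way of illustration (not from the notes; names invented), the top-face delta form-factors could be tabulated as follows; the side-face case differs only by the extra factor of z, and the symmetry mentioned above would let an implementation store just one octant of the top-face table.

```c
/* Sketch: pre-compute the top-face delta form-factors for a hemi-cube whose
 * top face is divided into res x res pixels.  The hemi-cube has edge length 2,
 * so top-face pixel centres lie in [-1,1] x [-1,1] at height 1. */
#include <stdlib.h>

#define PI 3.14159265358979

double *top_face_delta_factors(int res)
{
    double *dF = malloc(res * res * sizeof *dF);
    double dA  = (2.0 / res) * (2.0 / res);              /* area of one pixel  */
    for (int iy = 0; iy < res; iy++)
        for (int ix = 0; ix < res; ix++) {
            double x = -1.0 + (ix + 0.5) * 2.0 / res;    /* pixel centre       */
            double y = -1.0 + (iy + 0.5) * 2.0 / res;
            double r2 = x * x + y * y + 1.0;             /* r squared          */
            dF[iy * res + ix] = dA / (PI * r2 * r2);     /* top-face formula   */
        }
    return dF;
}

/* Delta form-factor of a pixel at (x, z) on a side face (Figure 57). */
double side_face_delta_factor(double x, double z, double dA)
{
    double r2 = x * x + z * z + 1.0;
    return z * dA / (PI * r2 * r2);
}
```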

6.5 Rendering

At this point we have applied the hemi-cube algorithm at every patch in the enclosure and derived all of the entries in the form-factor matrix. We can now solve Equation 6-7 for each colour band of interest and then render a view of the scene by converting the value of Bi into (R, G, B) values and using a conventional hidden-surface removal algorithm.

However, the result will be a picture made up of different coloured facets - an unsatisfactory situation which is improved upon by extrapolating the patch radiosities to the patch vertices and rendering an image using Gouraud interpolation across each patch.


Figure 58: Extrapolation of the patch radiosities to patch vertices

Figure 58 shows two different situations that can arise when trying to extrapolate patch radiosities to patch vertices. Figure 58a shows the most common case. If a vertex is interior to a surface, for example e, the radiosity assigned to it is the average of the radiosities of the patches that share it,

B_e = \frac{B_1 + B_2 + B_3 + B_4}{4}

If it is on the edge, for example b, we find the nearest interior vertex (vertex e again) and note that the midpoint X between b and e can have its radiosity expressed in two ways, so

\frac{B_b + B_e}{2} = \frac{B_1 + B_2}{2}

giving

B_b = B_1 + B_2 - B_e

In order to establish the radiosity of a vertex such as a, we again find its nearest interior vertex, e, and note that the average of Ba and Be should be the value of B1, hence

B_a = 2 B_1 - B_e

In the case illustrated in Figure 58b none of the results above can be applied, as there are no interior points. Although it is always possible to extrapolate to the vertices in this case, it is non-trivial to recognise the very many different scenarios that can arise, and there seems to be no general formula that covers all cases.
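As a small worked example (not in the original notes), the rules above for the 2 × 2 patch layout of Figure 58a translate directly into code:

```c
/* Sketch of the extrapolation rules of Figure 58a for a surface meshed into
 * 2 x 2 patches with radiosities B[0..3] (B1..B4 in the text).  Vertex 'e' is
 * the shared interior vertex, 'b' lies on the edge between patches 1 and 2,
 * and 'a' is the outer corner of patch 1. */
typedef struct { double Ba, Bb, Be; } VertexRadiosities;

VertexRadiosities extrapolate_2x2(const double B[4])
{
    VertexRadiosities v;
    v.Be = (B[0] + B[1] + B[2] + B[3]) / 4.0;  /* interior vertex: average      */
    v.Bb = B[0] + B[1] - v.Be;                 /* edge vertex:   Bb = B1+B2-Be  */
    v.Ba = 2.0 * B[0] - v.Be;                  /* corner vertex: Ba = 2B1-Be    */
    return v;
}
```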


6.5.1 Light and shadow leaks

A problem is often encountered in radiosity images when two surfaces meet but their respective meshes of elements do not coincide. To explain how this occurs, consider the schematic diagram in Figure 59 showing 6 elements of a surface and another surface perpendicular to it at an arbitrary angle. Note also that light is illuminating the surfaces from the top-right.

Figure 59: Light and shadow leaks

Each element will receive a certain level of illumination from the light source and surrounding surfaces, calculated using a hemi-cube situated at the centre of the element. The numbers shown at the centres of the elements indicate whether they have received any radiosity directly from the illuminating sources. When we come to render the scene we must extrapolate the radiosities to the element vertices so that, for example, the middle element will get radiosities proportional to the numbers shown. Hence the element will be rendered with a gradation of light from bottom-left to top-right, even though most of it should be dark. The visual result of this is that light will appear to leak onto parts of the surface which physically it can’t reach. The converse will also occur where the shadow leaks in the opposite direction, into a region where no shadows should occur.

Usually, only one type of leaking is apparent, depending on the contrast of radiosity between the element’s vertices. In the example discussed above the light leak will predominate, with the effect that the object whose surface is causing the shadowing will appear to float above the other surface. We will see real examples of this artefact later. We will also see how this artefact can be reduced using adaptive refinement of patches.

6.6 Progressive refinement radiosity

The basic radiosity method described thus far gives very realistic results for diffuse environments, but it can take an enormous amount of time to compute the picture, as well as taking up substantial amounts of storage. Before an image can be rendered it is first necessary to calculate and store the values of n×n form-factors, which takes of the order of n² time. It is also necessary to solve for the patch radiosities in each colour band, taking up another long period of time.


The progressive refinement approach attempts to overcome these problems by making a subtle change to the existing radiosity algorithm. The new technique provides storage costs of the order of n, as well as ensuring that an initial rendering of the scene is available after a time of the order of n.

The aim of progressive refinement radiosity is to combine image realism with user interactivity. This is accomplished by quickly finding an approximation to Equation 6-1 and then rendering an image. This approximation is then iteratively improved upon and the image updated every time an improvement is made. This progression can be halted by the user as soon as they think the image to be of suitable quality.

6.6.1 Formulation of the method

The reformulation of the radiosity algorithm is this: consider a single term from the summation in Equation 6-1 - this states that the portion of Bi which is due to Bj is

B_i \; (\text{due to } B_j) = \rho_i B_j F_{ji} \frac{A_j}{A_i} \qquad (6-12)

So by placing a single hemi-cube at patch j it is possible to calculate all n form-factors Fji. However, if Equation 6-12 is to be usefully applied in some graceful iterative process, it would be better to calculate the increase in Bi since the last time Bj contributed to it. We can do this by writing

B_i \; (\text{due to } \Delta B_j) = \rho_i \Delta B_j F_{ji} \frac{A_j}{A_i} \qquad (6-13)

where ∆Bj is the difference between the current estimate for Bj and the value that was used the last time patch j shot light. The term ∆Bj is effectively the unshot radiosity of the jth patch.

So now, after having found only n form-factors (by placing one hemi-cube at patch j), it is possible to determine by how much the radiosities of all patches i will increase due to the contribution from Bj. This process is described as shooting the light out from patch j to the rest of the enclosure. In particular, if patch j is chosen to be the patch with the greatest contribution still to make (∆BjAj is largest) then an illuminated image can be rendered after only one hemi-cube calculation. If the patch is chosen randomly then it will typically be the case that the scene is still dark (apart from the light sources) after several such calculations. Sorting the shooting patches in terms of their unshot radiosity greatly increases the speed with which the solution converges.

After one iteration the rendered image will not be 'correct', as the values used for ∆Bj in Equation 6-13 are all underestimates. The algorithm continues to iterate until the increase in all patch radiosities is deemed negligible. It should be noted, however, that the picture is invariably deemed to be of a suitable standard within one iteration if sorting is utilised.
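A rough sketch of one shooting step is given below (all names are invented, and the hemi-cube routine is only a placeholder); B[] and dB[] are both initialised to the patch emissions E[] before the first step.

```c
/* Sketch: one shooting step of progressive refinement radiosity.
 * A[], rho[] are patch areas and reflectivities; B[] the current radiosities;
 * dB[] the unshot radiosities; form_factors_from() is a placeholder that fills
 * Fj[i] = Fji by placing one hemi-cube at patch j. */
#include <stdlib.h>

void shoot_once(int n, const double *A, const double *rho,
                double *B, double *dB,
                void (*form_factors_from)(int j, double *Fj))
{
    int j = 0;
    for (int k = 1; k < n; k++)
        if (dB[k] * A[k] > dB[j] * A[j])
            j = k;                                /* greatest unshot energy dBj*Aj */

    double *Fj = malloc(n * sizeof *Fj);
    form_factors_from(j, Fj);                     /* one hemi-cube at patch j */

    for (int i = 0; i < n; i++) {
        if (i == j) continue;
        double delta = rho[i] * dB[j] * Fj[i] * A[j] / A[i];  /* Equation 6-13 */
        B[i]  += delta;                           /* patch i brightens ...          */
        dB[i] += delta;                           /* ... and gains light to re-shoot */
    }
    dB[j] = 0.0;                                  /* patch j has shot all its light */
    free(Fj);
}
```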


6.6.2 Ambient contribution

The early stages of the progressive refinement radiosity procedure generally produce poor images, particularly in those areas of the enclosure that do not receive direct illumination. To compensate for this shortcoming (caused by the fact that the algorithm is not calculating radiosities simultaneously), an ambient term, used for display purposes only, is employed. The ambient term is based on the average reflectivity of the environment and also on the current estimate of the radiosities of all the patches. This means that it 'gracefully' lessens as the real solution converges.

In order to calculate the ambient contribution, we first assert that a reasonable guess for the form-factor from any patch j to a given patch i can be found from

F_{ji}^{est} = \frac{A_i}{\sum_{k=1}^{n} A_k}

which is the ratio of the area of patch i to the total surface area of the enclosure.

The average reflectivity for the enclosure is taken to be the area-weighted average of the patch reflectivities, i.e.

\rho_{av} = \frac{\sum_{j=1}^{n} \rho_j A_j}{\sum_{j=1}^{n} A_j} \qquad (6-14)

Given any energy being emitted into the enclosure, a fraction ρav will, on average, be reflected back into the enclosure, and ρav of that will, on average, be reflected back, and so on. So the overall inter-reflection coefficient for the enclosure is given by

1 + \rho_{av} + \rho_{av}^2 + \rho_{av}^3 + \dots \qquad (6-15)

Since 0 ≤ ρj ≤ 1, then from Equation 6-14 the same inequality holds for ρav, so Equation 6-15 can be written, using the binomial theorem, as

1 + \rho_{av} + \rho_{av}^2 + \rho_{av}^3 + \dots = \frac{1}{1 - \rho_{av}}

The ambient radiosity term is then taken to be the product of the inter-reflection coefficient with the area-weighted average of all the radiosity which has yet to be shot via a form-factor computation. So

B_{amb} = \frac{1}{1 - \rho_{av}} \sum_{i=1}^{n} \Delta B_i F_{ji}^{est}

All that remains now, in order to more realistically render the kth patch, is to add ρkBamb to whatever value was returned by the algorithm for Bk. This revised value plays no part in further calculations and is included purely for display purposes. Note also that, as the solution converges, ∆Bi → 0 and so Bamb → 0; thus the ambient term gracefully vanishes.
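A sketch of this display-only estimate (again with invented names, not taken from the notes) follows; it would be recomputed from the current unshot radiosities after each shooting step.

```c
/* Sketch: the display-only ambient estimate of this section, computed from the
 * current unshot radiosities dB[].  A[] and rho[] are patch areas and
 * reflectivities. */
double ambient_radiosity(int n, const double *A, const double *rho, const double *dB)
{
    double area = 0.0, rho_area = 0.0, unshot = 0.0;
    for (int i = 0; i < n; i++) {
        area     += A[i];
        rho_area += rho[i] * A[i];
        unshot   += dB[i] * A[i];     /* dBi weighted by Ai; dividing by area below
                                         applies the estimated form-factor Ai/area */
    }
    double rho_av = rho_area / area;              /* Equation 6-14 */
    return (unshot / area) / (1.0 - rho_av);      /* Bamb */
}

/* For display only, patch k would then be rendered with B[k] + rho[k] * Bamb. */
```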


6.7 Improving the radiosity solution

Having shaded a scene, it is often the case that shadows and other areas where the radiosity changes rapidly are poorly represented. This is because the sampling density for working out the radiosity at a given point is only as fine or coarse as the patch density: the more patches, the greater the number of points with known radiosity values and the better the solution.

One way to overcome this problem is simply to increase the number of patches in the scene. However, this will also increase the number of form-factors to calculate: for example, doubling the number of patches will create a four-fold increase in the number of form-factors. More importantly, the computation time will also increase.

6.7.1 Substructuring

A better solution is to use a technique called substructuring, which works as follows. First we need some new terminology that distinguishes a patch (as described earlier) from an element, which is some fraction of a patch that has been obtained by a binary subdivision of a patch or another element. Elements are the smallest units for which a radiosity value is calculated. Also, elements are indexed using the letter q, while patches use i and j.

Now assume that the environment has been split into n patches as before and that some or all of these patches have been split into a number of elements, so that there are a total of m elements in the scene. We can now start to view the situation in terms of the element radiosities. From Equation 6-7 we know that the radiosity of element q can be written as

B_q = E_q + \rho_q \sum_{j=1}^{n} B_j F_{qj} \qquad (6-16)

Suppose that patch i has been divided into t elements. Then the radiosity of patch i can be represented as an area-weighted average over the patch of the t element radiosities

B_i = \frac{1}{A_i} \sum_{q=1}^{t} B_q A_q

which on substituting Equation 6-16 gives

B_i = \frac{1}{A_i} \sum_{q=1}^{t} \left( E_q + \rho_q \sum_{j=1}^{n} B_j F_{qj} \right) A_q \qquad (6-17)

We have already assumed that the global radiosity does not vary across a patch. If we also assume that neither ρq nor Eq can vary across a patch then Equation 6-17 can be simplified to

B_i = E_i + \rho_i \sum_{j=1}^{n} B_j \left( \frac{1}{A_i} \sum_{q=1}^{t} F_{qj} A_q \right) \qquad (6-18)

Comparing this equation with Equation 6-7 we see that


F_{ij} = \frac{1}{A_i} \sum_{q=1}^{t} F_{qj} A_q \qquad (6-19)

which is a better approximation than the assumption that the inner integral in Equation 6-4 is constant across the whole patch i (the assumption is now only that the integral is constant across each element). Because of this improvement, the resulting patch-to-patch form-factors are more accurate.

Therefore, substructuring allows us to obtain accurate values for the element radiosities, using the radiosity values already obtained from the coarse-patch solution of the environment, by simply using Equation 6-16. We illustrate this process in Figure 60.

Figure 60: Capturing radiosity gradients

Shown in Figure 60a is the true variation of the radiosity across a patch, plotted as a two-dimensional surface, while Figure 60b shows the coarse-patch representation, which is clearly a poor approximation. However, when the patch is split into 16 elements and a radiosity computed for each one, the resulting representation is much more faithful (Figure 60c).
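For illustration only (names invented, not from the notes), Equations 6-19 and 6-16 correspond to the two small routines below: one aggregates element-to-patch form-factors into a patch-to-patch form-factor, the other recovers an element radiosity from the coarse patch solution.

```c
/* Patch-to-patch form-factor Fij as the area-weighted sum of the element
 * form-factors Fqj (Equation 6-19).  Fq[q][j] = Fqj, Aq[] = element areas,
 * Ai = total patch area, t = number of elements in patch i. */
double patch_form_factor(int t, const double *Aq, double **Fq, int j, double Ai)
{
    double sum = 0.0;
    for (int q = 0; q < t; q++)
        sum += Fq[q][j] * Aq[q];
    return sum / Ai;
}

/* Element radiosity recovered from the coarse patch solution B[] (Equation 6-16).
 * Fqj_row[j] = Fqj for this element; Eq and rho_q are its emission and
 * reflectivity (assumed equal to those of its parent patch). */
double element_radiosity(int n, const double *B, const double *Fqj_row,
                         double Eq, double rho_q)
{
    double gathered = 0.0;
    for (int j = 0; j < n; j++)
        gathered += B[j] * Fqj_row[j];
    return Eq + rho_q * gathered;
}
```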

6.7.2 Adaptive subdivision

Adaptive subdivision is a technique whereby only those patches (or elements) that have a radiosity gradient across their surface exceeding some predefined level are subject to subdivision.

By making an initial guess as to which parts of the picture should be subdivided, a 'first attempt' solution can be obtained exactly as described in Section 6.3. The patches and elements can now be examined and checked for excessive radiosity


gradients, and if necessary a further subdivision into elements is undertaken. Then a second solution is computed and the process repeated until none of the radiosity gradients are deemed too steep.

The resulting image is one which may well contain fewer elements than an acceptable image computed using simple substructuring, but whose shadow definition and shading are far superior, as the element density at critical points in the image will be much higher.

Figure 61: Adaptive subdivision

Figure 61 shows two cases in which a patch, which has a shadow boundary across its surface, has been subdivided in an effort to accurately account for the steep radiosity gradient caused by the shadow. In both cases the patch was split (with an initial guess) into four elements. Figure 61a shows the case where the elements have been split using adaptive subdivision and Figure 61b shows the case where the same resolution as in Figure 61a has been attained, but this time using simple substructuring. Note that the same image should result from both approaches, but the structure represented in Figure 61a will require far less storage space and computation than that represented in Figure 61b.

We can use this adaptive subdivision technique within the progressive radiosity solution method in a similar way. As in the full matrix algorithm, patches are split into elements and these elements act as receivers of light from patches, not other elements. So a hemi-cube is placed at every patch in the scene and elements are projected onto its surface in order to find the m patch-to-element form-factors Fiq. In effect, patches shoot light out to elements. Other aspects of the adaptive subdivision of patches are exactly the same as for the full matrix situation.


6.8 Example images

To illustrate the power of the radiosity method, we will show in this section a few example images which were computed using the progressive refinement technique. In particular we will show how radiosity solutions are adept at capturing natural features, such as soft shadows and colour bleeding1, which ray-tracing would find hard to simulate.

Our first example image is shown in Figure 62, depicting 5 views of a simple closed environment. The scene is composed of a large room with an area light source in the centre of the ceiling. The walls of the room are coloured red, blue, light-gray and black. Inside the room there are two other objects sitting on the floor: a small yellow box and a taller box with mid-gray sides and a light-gray top.

The walls, floor and ceiling have each been divided uniformly into 100 elements (52 patches), the yellow and gray boxes into 20 elements (5 patches) each, and the light source is represented by 1 element (1 patch). The calculation required 1506 iterations to converge: the solution was deemed converged when the total unshot energy in the environment was less than 0.1% of that input by the light source. Of course, the views shown in Figure 62 only require one solution, each view being quickly calculated using this single solution.

These images illustrate clearly how radiosity can render realistic shadows. An area light source, like the one used in this demonstration, always casts soft-edged shadows with umbra and penumbra. However, notice how the boxes appear to be floating above the floor. This is caused by a combination of light and shadow leaks, which we discussed earlier. Also note the strong colour bleeding between surfaces, particularly the effect of the red wall on the side of the yellow box and of the blue wall on the sides of the gray box.

Figure 63 shows the progressive refinement technique in action on a single view of the environment described above. The sequence of images represents the state of the refinement after 1, 2, 20 and 100 iterations, with 100%, 45%, 34% and 21% unshot energy remaining, respectively. No estimated ambient contribution was used in the rendering of these views. After 1 iteration only the energy from the light source has been distributed amongst the other patches, and we can see that only surfaces in the line of sight of the light source have received (and reflected) any light. The rest of the scene is black. After 2 iterations the patch with the largest unshot energy, actually a patch on top of the tall box, has shot its energy to the environment, most noticeably to the section of the ceiling immediately above it, which brightens accordingly.

After each iteration the amount of unshot energy decreases, resulting in a gradual brightening of the scene. Even after 100 iterations the image looks almost the same as the fully converged solution and we would be tempted to stop the calculation here. However, it is important to continue, since many more iterations are needed to show subtle detail in the shadows and in the inter-reflection of light between the various surfaces.

The sequence of images shown in Figure 64 uses the results of the progressive refinement calculation shown in Figure 63, but uses an ambient contribution (estimated from the total unshot energy) when rendering each image.

1. Again it is emphasised that the quality of these black & white prints does not match the quality of the original full-colour images. Refer to the image files distributed with this document if necessary, especially for the colour bleeding effects mentioned in the text.


Figure 62: Five views of a simple scene computed using the radiosity method


Figure 63: Progressive refinement without ambient display term

Figure 64: Progressive refinement with ambient display term


In the next example image we will illustrate the adaptive refinement technique for patch subdivision and show how the method is used to increase the accuracy of the radiosity solution.

Figure 65 shows another simple environment, similar to the one we saw earlier, and the results of a complete radiosity solution. Also shown is the distribution of patches and elements into which each surface was divided.

There are notably two regions of the scene which have been heavily subdivided: the area where the ceiling meets the walls, and around the edge of the box and its shadow. The region around the shadow will have a particularly high radiosity contrast, which has triggered the refinement, resulting in the better shadow definition rendered in this image. However, adaptive refinement can never be perfect and we can see some rendering artefacts which cause the shadow boundary to break up in places. Also notice that the refinement has eliminated most of the shadow leak around the edge of the box, so that this time the box looks to be firmly planted on the floor!

Finally, shown in Figure 66 is a rendering of a scene which is much more complex than any we have considered so far.


Figure 65: A radiosity simulation with adaptive patch refinement (Courtesy of Fraunhofer-IGD, 64283 Darmstadt, Germany)


Figure 66: A radiosity simulation of a complex scene with adaptive patch refinement (Courtesy of Fraunhofer-IGD, 64283 Darmstadt, Germany)


A Vectors

A.1 What is a vector?

A vector is a quantity with magnitude and direction. It can be defined as a directed line segment in some coordinate system. Figure 67a shows a vector with endpoints (x1, y1, z1) and (x2, y2, z2).

We can shift the vector so that one of its endpoints is at the origin of the coordinate system (Figure 67b). This will not affect either its magnitude or direction. Now we can describe the vector by the single endpoint (x', y', z'). The vector now represents a point in the coordinate system, and x', y' and z' are called the vector’s components.

The magnitude (or length) of a vector v is written |v|, and can be computed from Pythagoras’s theorem

|v| = \sqrt{x^2 + y^2 + z^2}

If the magnitude of a vector is 1, it is said to be normalised, and is called a unit vector. We will use the notation v̂ to indicate that a vector v is a unit vector.

Sometimes a vector is expressed in terms of three unit vectors parallel to the x-, y- and z-axes, called i, j and k respectively. So a vector v can also be written

v = x i + y j + z k

Figure 67: Vectors in a coordinate system.

A.2 Vector addition

Figure 68a shows two vectors v1 = (x1, y1, z1) and v2 = (x2, y2, z2). We define the sum of two vectors by adding their x-, y- and z-components respectively

v_1 + v_2 = (x_1 + x_2,\; y_1 + y_2,\; z_1 + z_2)


The resulting vector is shown in Figure 68b.

Figure 68: Adding vectors v1 and v2.

A.3 Vector multiplication

There are two ways to multiply vectors: one method gives a scalar result, and the other a new vector.

The first method, called the dot product (or scalar product), is defined as

v_1 \cdot v_2 = x_1 x_2 + y_1 y_2 + z_1 z_2

or alternatively, using the magnitudes of the vectors and the angle θ between them

v_1 \cdot v_2 = |v_1| \, |v_2| \cos\theta

The second method of multiplying vectors is called the cross product (or vector product). This is defined as

v_1 \times v_2 = |v_1| \, |v_2| \sin\theta \, \hat{u}

where û is a unit vector perpendicular to both v1 and v2.
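These operations translate directly into code; the following C sketch (not part of the notes, with invented names) collects them in one place for reference.

```c
/* Sketch of the vector operations of Sections A.1-A.3. */
#include <math.h>

typedef struct { double x, y, z; } Vec3;

double vec_length(Vec3 v)            /* |v| by Pythagoras */
{
    return sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

Vec3 vec_normalise(Vec3 v)           /* unit vector in the direction of v */
{
    double len = vec_length(v);
    Vec3 u = { v.x / len, v.y / len, v.z / len };
    return u;
}

Vec3 vec_add(Vec3 a, Vec3 b)         /* component-wise addition */
{
    Vec3 s = { a.x + b.x, a.y + b.y, a.z + b.z };
    return s;
}

double vec_dot(Vec3 a, Vec3 b)       /* scalar (dot) product */
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 vec_cross(Vec3 a, Vec3 b)       /* vector (cross) product, perpendicular to a and b */
{
    Vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}
```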

A.4 Determining the surface normal

In rendering, we will often need to know the surface normal vector for a surface, since this is used to measure the orientation of the surface. There are several techniques for determining the normal vector.

If we have an analytical definition of the surface, we can derive the normal vector for any point on the surface directly. For a planar polygon whose plane equation is

ax + by + cz + d = 0

the surface normal N is given by

N = a i + b j + c k

where i, j and k are the unit vectors in the x-, y- and z-directions respectively.


If the plane equation is not available (as will often be the case), the normal for a vertex of a polygon can be derived from two edges that terminate at the vertex. If the two edges are E1 and E2, the normal vector N is given by the cross product of the two edge vectors

N = E_1 \times E_2

This is illustrated in Figure 69.

Figure 69: Determining the surface normal N from adjacent edges E1 and E2.
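As a small follow-on sketch (reusing the Vec3 type, vec_cross() and vec_normalise() from the previous example; all names invented), the normal at a polygon vertex can be computed like this. Note that the direction of the result depends on the order in which the two edges are taken, i.e. on the winding of the polygon.

```c
/* Normal at polygon vertex p, derived from the two edges joining p to its
 * neighbouring vertices p_prev and p_next (N = E1 x E2). */
Vec3 vertex_normal(Vec3 p_prev, Vec3 p, Vec3 p_next)
{
    Vec3 e1 = { p_prev.x - p.x, p_prev.y - p.y, p_prev.z - p.z };  /* edge E1 */
    Vec3 e2 = { p_next.x - p.x, p_next.y - p.y, p_next.z - p.z };  /* edge E2 */
    return vec_normalise(vec_cross(e1, e2));
}
```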


B Resources

B.1 Books

Watt A., Fundamentals of Three-Dimensional Computer Graphics, Addison-Wesley 1989.

Foley J.D., van Dam A., Feiner S.K. and Hughes J.F., Computer Graphics: Principles and Practice, 2nd edition, Addison-Wesley 1990.

Glassner A. (Ed.), An Introduction to Ray Tracing, Academic Press 1989.

Ashdown I., Radiosity: A Programmer’s Perspective, John Wiley 1994.

Cohen M.F. and Wallace J.R., Radiosity and Realistic Image Synthesis, Academic Press Professional 1993.

Sillion F.X. and Puech C., Radiosity and Global Illumination, Morgan Kaufmann 1994.

Glassner A., Principles of Digital Image Synthesis (2 vols.), Morgan Kaufmann 1995.

Hall R., Illumination and Color in Computer Generated Imagery, Springer-Verlag 1989.

B.2 Research literature

If you are interested in pursuing topics in state-of-the-art computer graphics there are several very good sources of research literature. Of relevance here are:

• IEEE Computer Graphics & Applications

• Computer Graphics Forum

• ACM Transactions on Graphics

• Eurographics Conference Proceedings (annual), published as a special edition of Computer Graphics Forum

• Eurographics Workshop on Rendering (annual)

• ACM SIGGRAPH Conference Proceedings (annual), published as a special edition of ACM Computer Graphics

B.3 Software

The example images included in the chapter on ray-tracing were generated using a package called Rayshade, created by Craig Kolb of Princeton University. This ray-tracing program is freely available on the internet and you can download the source code from:

• WWW at http://www-graphics.stanford.edu/~cek/rayshade/


• anonymous FTP at graphics.stanford.edu (36.22.0.146) in directory /pub/rayshade/

The software runs on a variety of UNIX platforms; there are also versions for MSDOS, Macintosh, Amiga and OS/2.

There are several other ray-tracing programs available for downloading. A good starting point for looking at these is the ray-tracing WWW home page at

• http://arachnid.cs.cf.ac.uk/Ray.Tracing/

which also contains pointers to other ray-tracing resources on the internet.

The example images included in the chapter on radiosity were generated using a package called Helios, created by Ian Ashdown of Ledalite Architectural Products Inc. Although this program is not freely available (the source code is distributed on a diskette which you can buy in conjunction with his book), an almost fully-functioning demonstration version is. You can find the program on the WWW at

• http://www.ledalite.com/library/rrt.html

The demo is a ready-compiled executable for running under MS-Windows. This site also contains other sources of information relevant to lighting and illumination. You can find this information at

• http://www.ledalite.com/library/ledapub.html

For a more advanced radiosity system, you may wish to take a look at a program called RAD, written by Philippe Bekaert of the University of Leuven, the source code of which is available from

• http://www.cs.kuleuven.ac.be/~philippe/rad.html

and, at the moment, only appears to run on Silicon Graphics machines.
