EECS 274 Computer Vision


Page 1: EECS 274 Computer Vision

EECS 274 Computer Vision

Sources, Shadows, and Shading

Page 2: EECS 274 Computer Vision

Surface brightness

• Depends on local surface properties (albedo), surface shape (normal), and illumination

• Shading model: a model of how brightness of a surface is obtained

• Can interpret pixel values to reconstruct a surface's shape and albedo

• Reading: FP Chapter 2, H Chapter 11

Page 3: EECS 274 Computer Vision

Radiometric properties

• How bright (or what color) are objects?

• One more definition: exitance of a light source is
  – the internally generated power (not reflected) radiated per unit area on the radiating surface

• Similar to radiosity: a source can have both
  – radiosity, because it reflects
  – exitance, because it emits

• Independent of its exit angle

• Internally generated energy radiated per unit time, per unit area

• But what aspects of the incoming radiance will we model?
  – Point, line, area sources
  – Simple geometry

$E(P) = \int_{\Omega} L_e(P, \theta_o, \phi_o)\,\cos\theta_o \, d\omega$
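As a worked special case (not on the original slide, but it is the relation used on the area-source slide later): for a diffuse source whose emitted radiance $L_e$ is the same in every direction, the integral reduces to a constant factor of $\pi$:

$E(P) = \int_{\Omega} L_e \cos\theta_o\, d\omega = L_e \int_0^{2\pi}\!\!\int_0^{\pi/2} \cos\theta_o \sin\theta_o \, d\theta_o\, d\phi_o = \pi L_e, \qquad\text{i.e. } L_e = \frac{E}{\pi}$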

Page 4: EECS 274 Computer Vision

Radiosity due to a point source

(Figure: geometry of a small spherical source at distance r from the surface patch)

Page 5: EECS 274 Computer Vision

Radiosity due to a point source

$B(P) = \rho_d(P)\,\underbrace{\frac{\varepsilon^2}{r^2}}_{\text{solid angle}}\;\underbrace{E}_{\text{exitance}}\;\underbrace{\cos\theta_i}_{\text{cos term}}$

• As r is increased, the rays leaving the surface patch and striking the source sphere move closer together fairly evenly, and the collection changes only slightly; ρd is the diffuse reflectance, or albedo

• Radiosity due to source

Page 6: EECS 274 Computer Vision

Nearby point source model

• The angle term can be written in terms of N and S

• N: surface normal

• ρd: diffuse albedo

• S: source vector - a vector from P to the source, whose length is the intensity term ε²E
  – works because a dot product is basically a cosine

$B(P) = \rho_d(P)\,\underbrace{\frac{\varepsilon^2}{r(P)^2}}_{\text{solid angle}}\;\underbrace{E}_{\text{exitance}}\;\underbrace{\cos\theta_i}_{\text{cos term}} \;=\; \rho_d(P)\,\frac{N(P)\cdot S(P)}{r(P)^2}$
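A minimal numeric sketch of this nearby point source model (the function name, the unit conventions, and the clamping of back-facing light are illustrative choices, not from the slides):

import numpy as np

def radiosity_nearby_point_source(P, N, albedo, source_pos, eps2E):
    # B(P) = rho_d(P) * (N(P) . S(P)) / r(P)^2, with S(P) pointing from P to the
    # source and having length eps^2 * E (the folded intensity term)
    d = source_pos - P
    r = np.linalg.norm(d)
    S = eps2E * d / r
    return albedo * max(float(np.dot(N, S)), 0.0) / r**2   # clamp: no light from below the horizon

# example: patch at the origin, normal straight up, source 2 units overhead
print(radiosity_nearby_point_source(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                    0.8, np.array([0.0, 0.0, 2.0]), eps2E=1.0))  # -> 0.2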

Page 7: EECS 274 Computer Vision

Point source at infinity

• Issue: a nearby point source gets bigger as one gets closer
  – the sun doesn't, under any reasonable assumption

• Assume that all points in the model are close to each other with respect to the distance to the source

• Then the source vector doesn't vary much, and the distance doesn't vary much either, and we can roll the constants together to get:

$B(P) = \rho_d(P)\,\big(N(P)\cdot S\big)$

Starting from the nearby-source model $B(P) = \rho_d(P)\,\dfrac{N(P)\cdot S(P)}{r(P)^2}$, write $S(P) = S_0 + \varepsilon S_1(P)$ and $r(P) = r_0\big(1 + \varepsilon r_1(P)\big)$ for a small $\varepsilon$; then

$\frac{N\cdot S(P)}{r(P)^2} = \frac{N\cdot S_0 + \varepsilon\, N\cdot S_1}{r_0^2\big(1 + \varepsilon r_1\big)^2} \approx \frac{N\cdot S_0}{r_0^2}\big(1 - 2\varepsilon r_1\big) + \varepsilon\,\frac{N\cdot S_1}{r_0^2} + O(\varepsilon^2)$

so to lowest order the radiosity is $B(P) \approx \rho_d(P)\,N(P)\cdot S$ with the constant source vector $S = S_0 / r_0^2$.
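A quick numeric check of the approximation (illustrative numbers only): for a patch far from the source, the exact nearby-source model and the source-at-infinity model with S = S_0/r_0^2 give nearly the same radiosity:

import numpy as np

N = np.array([0.0, 0.0, 1.0])             # surface normal
source = np.array([0.0, 0.0, 1000.0])      # point source far away along +z
eps2E = 1.0e6                               # folded intensity term eps^2 * E
S_inf = eps2E * source / np.linalg.norm(source)**3   # constant vector S = S_0 / r_0^2

for P in [np.zeros(3), np.array([5.0, 0.0, 0.0]), np.array([0.0, 5.0, 1.0])]:
    d = source - P
    r = np.linalg.norm(d)
    nearby = max(float(N @ (eps2E * d / r)), 0.0) / r**2   # exact nearby-source model (albedo 1)
    distant = max(float(N @ S_inf), 0.0)                   # point-source-at-infinity model
    print(r, nearby, distant)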

Page 8: EECS 274 Computer Vision

Line sources

Radiosity due to a line source varies with inverse distance, if the source is long enough

Infinitely long narrow cylinder with constant exitance

Page 9: EECS 274 Computer Vision

Area sources

• Examples: diffuser boxes, white walls

• The radiosity at a point due to an area source is obtained by adding up the contribution over the section of the view hemisphere subtended by the source
  – change variables and add up over the source

Page 10: EECS 274 Computer Vision

Radiosity due to an area source

• ρd is albedo

• E is exitance

• r is the distance between points Q and P

• Q is a coordinate on the source

$B(P) = \rho_d(P) \int_{\Omega} L_i(P, Q \to P)\,\cos\theta_i \, d\omega$

$\;\;\;\;= \rho_d(P) \int_{\Omega} L_e(Q, Q \to P)\,\cos\theta_i \, d\omega$

$\;\;\;\;= \rho_d(P) \int_{\Omega} \frac{E(Q)}{\pi}\,\cos\theta_i \, d\omega$

$\;\;\;\;= \rho_d(P) \int_{\text{source}} E(Q)\,\frac{\cos\theta_i \cos\theta_s}{\pi r^2}\, dA_Q$

where $L_e(Q) = \frac{1}{\pi} E(Q)$ for a diffuse source.
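A rough numerical sketch of this integral for a square diffuse source discretized into small patches (the geometry, constant exitance, and function name are made-up illustrations, not from the slide):

import numpy as np

def radiosity_area_source(P, N, albedo, exitance, z=2.0, half=0.5, n=64):
    # B(P) ~ rho_d * sum over source patches of E(Q) cos(th_i) cos(th_s) / (pi r^2) dA_Q
    # for a square source of side 2*half at height z, facing straight down
    xs = np.linspace(-half, half, n)
    dA = (2.0 * half / n) ** 2
    N_src = np.array([0.0, 0.0, -1.0])        # source normal (toward the patch)
    B = 0.0
    for x in xs:
        for y in xs:
            Q = np.array([x, y, z])
            d = Q - P
            r = np.linalg.norm(d)
            w = d / r                          # unit direction from P toward Q
            cos_i = max(float(N @ w), 0.0)     # angle at the receiving point P
            cos_s = max(float(N_src @ (-w)), 0.0)   # angle at the source point Q
            B += exitance * cos_i * cos_s / (np.pi * r**2) * dA
    return albedo * B

# example: unit-albedo patch at the origin, normal up, 1x1 source two units overhead
print(radiosity_area_source(np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, exitance=1.0))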

Page 11: EECS 274 Computer Vision

Shading models

• Local shading model
  – Surface has radiosity due only to sources visible at each point
  – Advantages:
    • often easy to manipulate, expressions easy
    • supports quite simple theories of how shape information can be extracted from shading

• Global shading model
  – Surface radiosity is due to radiance reflected from other surfaces as well as from sources
  – Advantage:
    • usually very accurate
  – Disadvantage:
    • extremely difficult to infer anything from shading values

Page 12: EECS 274 Computer Vision

Local shading models

• For point sources at infinity:

$B(P) = \sum_{s \,\in\, \text{sources visible from } P} B_s(P) = \sum_{s \,\in\, \text{sources visible from } P} \rho_d(P)\,\big(N(P)\cdot S_s\big)$

• For point sources not at infinity:

$B(P) = \sum_{s \,\in\, \text{sources visible from } P} \rho_d(P)\,\frac{N(P)\cdot S_s(P)}{r_s(P)^2}$
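A small sketch of the local shading model for distant point sources, with an explicit visibility flag per source (illustrative names; in practice visibility would come from the scene geometry):

import numpy as np

def local_shading_distant(N, albedo, sources, visible):
    # B(P) = sum over sources visible from P of rho_d(P) * (N(P) . S_s)
    B = 0.0
    for S, vis in zip(sources, visible):
        if vis:                                     # only sources the point can see contribute
            B += albedo * max(float(np.dot(N, S)), 0.0)
    return B

# example: two distant sources, the second blocked (the point lies in its shadow)
N = np.array([0.0, 0.0, 1.0])
sources = [np.array([0.3, 0.0, 0.9]), np.array([-0.5, 0.5, 0.7])]
print(local_shading_distant(N, 0.9, sources, visible=[True, False]))   # -> 0.81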

Page 13: EECS 274 Computer Vision

Shadows cast by a point source

• A point that can't see the source is in shadow (self cast shadow)

• For point sources, the geometry is simple (i.e., the relationship between shape and shading is simple)

• Radiosity is a measurement of one component of the surface normal

$B(P) = \sum_{s \,\in\, \text{sources visible from } P} B_s(P) = \sum_{s \,\in\, \text{sources visible from } P} \rho_d(P)\,\big(N(P)\cdot S_s\big)$

Analogous to the geometry of viewing in a perspective camera

Page 14: EECS 274 Computer Vision

Area source shadows

Area sources do not produce dark shadows with crisp boundaries

1. Out of shadow
2. Penumbra ("almost shadow")
3. Umbra ("shadow")

Page 15: EECS 274 Computer Vision

Photometric stereo

• Assume:
  – A local shading model
  – A set of point sources that are infinitely distant
  – A set of pictures of an object, obtained in exactly the same camera/object configuration but using different sources
  – A Lambertian object (or the specular component has been identified and removed)

Page 16: EECS 274 Computer Vision

Projection model for surface recovery - Monge patch

In computer vision, it is often known as a height map, depth map, or dense depth map

Monge patch

Page 17: EECS 274 Computer Vision

Image model

• For each point source, we know the source vector (by assumption)

• We assume we know the scaling constant of the linear camera (i.e., intensity value is linear in the surface radiosity)

• Fold the normal and the reflectance into one vector g, and the scaling constant and source vector into another Vj

• Out of shadow:

$I_j(x,y) = k B_j(x,y) = k\,\rho(x,y)\,\big(N(x,y)\cdot S_j\big) = \big(\rho(x,y)\,N(x,y)\big)\cdot\big(k S_j\big) = g(x,y)\cdot V_j$

• g(x,y): describes the surface

• Vj: property of the illumination and of the camera

• In shadow:

$I_j(x,y) = 0$

Page 18: EECS 274 Computer Vision

From many views

• From n sources, for each of which Vi is known

• For each image point, stack the measurements

• Solve least squares problem to obtain g

$\mathcal{V} = \begin{pmatrix} V_1^T \\ V_2^T \\ \vdots \\ V_n^T \end{pmatrix}, \qquad i(x,y) = \big(I_1(x,y),\, I_2(x,y),\, \ldots,\, I_n(x,y)\big)^T$

$\begin{pmatrix} I_1(x,y) \\ I_2(x,y) \\ \vdots \\ I_n(x,y) \end{pmatrix} = \begin{pmatrix} V_1^T \\ V_2^T \\ \vdots \\ V_n^T \end{pmatrix} g(x,y), \qquad \text{i.e. } \; i(x,y) = \mathcal{V}\,g(x,y)$

One linear system per point

(recall the image model)

$I_j(x,y) = k B_j(x,y) = k\,\rho(x,y)\,\big(N(x,y)\cdot S_j\big) = \big(\rho(x,y)\,N(x,y)\big)\cdot\big(k S_j\big) = g(x,y)\cdot V_j$
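A compact sketch of the least-squares recovery of g described on this slide, solving one linear system per pixel (all pixels at once via NumPy); the toy data at the end are illustrative:

import numpy as np

def photometric_stereo(images, V):
    # images: (n, H, W) intensity stack, V: (n, 3) rows are the k*S_j vectors
    # solve V g = i in the least-squares sense for every pixel
    n, H, W = images.shape
    I = images.reshape(n, -1)                      # one column of n measurements per pixel
    g, *_ = np.linalg.lstsq(V, I, rcond=None)      # (3, H*W)
    return g.reshape(3, H, W)

# toy example: four known source vectors and a flat Lambertian patch with albedo 0.8
V = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0], [0.0, 0.5, 1.0], [-0.5, -0.5, 1.0]])
g_true = np.array([0.0, 0.0, 0.8])
imgs = np.clip(V @ g_true, 0, None).reshape(4, 1, 1) * np.ones((4, 8, 8))
print(photometric_stereo(imgs, V)[:, 0, 0])        # ~ [0, 0, 0.8]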

Page 19: EECS 274 Computer Vision

Dealing with shadows

To deal with shadowed pixels, multiply the j-th equation by I_j(x,y); measurements taken in shadow (I_j = 0) then drop out of the system:

$\underbrace{\begin{pmatrix} I_1^2(x,y) \\ I_2^2(x,y) \\ \vdots \\ I_n^2(x,y) \end{pmatrix}}_{\text{Known}} = \underbrace{\begin{pmatrix} I_1(x,y) & 0 & \cdots & 0 \\ 0 & I_2(x,y) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & I_n(x,y) \end{pmatrix}}_{\text{Known}} \; \underbrace{\begin{pmatrix} V_1^T \\ V_2^T \\ \vdots \\ V_n^T \end{pmatrix}}_{\text{Known}} \; \underbrace{g(x,y)}_{\text{Unknown}}$

instead of the unweighted system

$\begin{pmatrix} I_1(x,y) \\ I_2(x,y) \\ \vdots \\ I_n(x,y) \end{pmatrix} = \begin{pmatrix} V_1^T \\ V_2^T \\ \vdots \\ V_n^T \end{pmatrix} g(x,y), \qquad i(x,y) = \mathcal{V}\,g(x,y)$
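A per-pixel sketch of this shadow trick, weighting each equation by its own intensity so that shadowed measurements drop out (illustrative; a real implementation would also threshold very dark pixels):

import numpy as np

def photometric_stereo_shadows(images, V):
    # images: (n, H, W), V: (n, 3); solve diag(i) V g = i**2 per pixel
    n, H, W = images.shape
    G = np.zeros((3, H, W))
    for y in range(H):
        for x in range(W):
            i = images[:, y, x]
            A = np.diag(i) @ V                     # Known * Known
            b = i ** 2                             # Known
            g, *_ = np.linalg.lstsq(A, b, rcond=None)
            G[:, y, x] = g                         # Unknown
    return G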

Page 20: EECS 274 Computer Vision

Recovering normal and reflectance

• Given sufficient sources, we can solve the previous equation (e.g., by least squares) for g(x, y)

• Recall that g(x, y) = ρ(x, y) N(x, y), and N(x, y) is the unit normal

• This means that the albedo ρ(x, y) = ||g(x, y)||

• This yields a check
  – If the magnitude of g(x, y) is greater than 1, there's a problem

• And N(x, y) = g(x, y) / ρ(x, y)
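The corresponding recovery step, sketched in the same style (names are illustrative):

import numpy as np

def recover_albedo_and_normals(G, eps=1e-8):
    # G: (3, H, W) stack of g(x,y) vectors from photometric stereo
    albedo = np.linalg.norm(G, axis=0)             # rho(x,y) = ||g(x,y)||
    normals = G / np.maximum(albedo, eps)          # N(x,y) = g(x,y) / rho(x,y)
    if np.any(albedo > 1.0 + 1e-3):                # the check from the slide
        print("warning: some recovered albedos exceed 1")
    return albedo, normals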

Page 21: EECS 274 Computer Vision

Five synthetic images

Generated from a sphere in an orthographic view from the same viewing position

Page 22: EECS 274 Computer Vision

Recovered reflectance

||g(x,y)|| = ρ(x,y): the albedo value should be in the range [0, 1]

Page 23: EECS 274 Computer Vision

Recovered normal field

For viewing purposes, the vector field is shown at every 16th pixel in each direction

Page 24: EECS 274 Computer Vision

Parametric surface tangents

(Figure: a parametric surface x(u, v) with tangent vectors ∂x/∂u and ∂x/∂v)

Page 25: EECS 274 Computer Vision

Shape from normals

• Recall the surface is written as

$\big(x,\; y,\; f(x, y)\big)$

• Parametric surface: $(x, y, z) = \big(x, y, f(x,y)\big)$ with $u = x$, $v = y$, so the tangents and their cross product are

$r_u = \mathbf{i} + f_x\,\mathbf{k}, \qquad r_v = \mathbf{j} + f_y\,\mathbf{k}, \qquad N \propto r_u \times r_v = -f_x\,\mathbf{i} - f_y\,\mathbf{j} + \mathbf{k}$

• This means the normal has the form:

$N(x, y) = \frac{1}{\sqrt{1 + f_x^2 + f_y^2}}\begin{pmatrix} -f_x \\ -f_y \\ 1 \end{pmatrix}$

• If we write the known vector g as

$g(x, y) = \begin{pmatrix} g_1(x, y) \\ g_2(x, y) \\ g_3(x, y) \end{pmatrix}$

• Then we obtain values for the partial derivatives of the surface:

$f_x(x, y) = -\,g_1(x, y)\,/\,g_3(x, y), \qquad f_y(x, y) = -\,g_2(x, y)\,/\,g_3(x, y)$

Page 26: EECS 274 Computer Vision

Shape from normals

• Recall that mixed second partials are equal --- this gives us a check. We must have:

$\frac{\partial}{\partial y}\Big(g_1(x,y)\,/\,g_3(x,y)\Big) = \frac{\partial}{\partial x}\Big(g_2(x,y)\,/\,g_3(x,y)\Big)$

(or they should be similar, at least)

• Known as the integrability test

• We can now recover the surface height at any point by integration along some path, e.g.

$f(u, v) = \int_0^v f_y(0, y)\, dy + \int_0^u f_x(x, v)\, dx + c$

),(),0(),(

Page 27: EECS 274 Computer Vision

Recovered surface by integration

$f(u, v) = \int_0^v f_y(0, y)\, dy + \int_0^u f_x(x, v)\, dx + c$

Page 28: EECS 274 Computer Vision

The illumination cone

What is the set of n-pixel images of an object under all possible lighting conditions (at fixed pose)? (Belhumeur and Kriegman, IJCV 99)

(Figure: the n-dimensional image space with axes x1, x2, ..., xn; a single light source image is one point in this space)

Page 29: EECS 274 Computer Vision

The illumination cone

What is the set of n-pixel images of an object under all possible lighting conditions (but fixed pose)?

Proposition: Due to the superposition of images, the set of images is a convex polyhedral cone (the illumination cone) in the image space.

(Figure: in the n-dimensional image space with axes x1, x2, ..., xn, single light source images are extreme rays of the cone; a 2-light source image lies inside it)

Page 30: EECS 274 Computer Vision

Generating the illumination cone

For Lambertian surfaces, the illumination cone is determined by the 3D linear subspace B(x,y), where

$I_i(x,y) = \max\big(\rho(x,y)\,n(x,y)\cdot s_i,\; 0\big) = \max\big(B(x,y)\cdot s_i,\; 0\big), \qquad B(x,y) = \rho(x,y)\,n(x,y)^T$

When there are no shadows, then

$I_i(x,y) = B(x,y)\cdot s_i$

Use least squares to find the 3D linear subspace, subject to the constraint f_xy = f_yx (Georghiades, Belhumeur, and Kriegman, PAMI, June 2001).

(Figure: original (training) images → f_x(x,y), f_y(x,y) and albedo (surface normals) → surface f(x,y) with the albedo texture-mapped on it → 3D linear subspace)
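A tiny sketch of how images in the cone are generated from the subspace matrix B (here B is an n_pixels x 3 array of albedo-scaled normals; the random data are purely illustrative):

import numpy as np

def render_from_subspace(B, s):
    # I = max(B s, 0); dropping the max, the images span a 3D linear subspace
    return np.maximum(B @ s, 0.0)

rng = np.random.default_rng(0)
B = rng.uniform(-1.0, 1.0, size=(5, 3))             # 5 pixels, albedo-scaled normals
I1 = render_from_subspace(B, np.array([0.0, 0.0, 1.0]))
I2 = render_from_subspace(B, np.array([0.5, 0.0, 1.0]))
print(I1 + I2)   # superposition: the image lit by both sources at once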

Page 31: EECS 274 Computer Vision

Single Light Source Face Movie

Image-based rendering: Cast shadows

Page 32: EECS 274 Computer Vision

Yale face database B

• 10 Individuals
• 64 Lighting Conditions
• 9 Poses => 5,760 Images

Variable lighting

Page 33: EECS 274 Computer Vision

Limitation

• The local shading model is a poor description of the physical processes that give rise to images, because surfaces reflect light onto one another

• This is a major nuisance; the distribution of light (in principle) depends on the configuration of every radiator; big distant ones are as important as small nearby ones (solid angle)

• The effects are easy to model

• It appears to be hard to extract information from these models

Page 34: EECS 274 Computer Vision

Interreflections - a global shading model

• Other surfaces are now area sources - this yields:

• Vis(x, u) is 1 if they can see each other, 0 if they can't

$\underbrace{B(x)}_{\text{Radiosity at a surface}} = \underbrace{E(x)}_{\text{Exitance}} + \underbrace{\rho_d(x) \int_{\text{all other surfaces}} B(u)\,\frac{\cos\theta_i \cos\theta_s}{\pi r^2}\,\mathrm{Vis}(x, u)\, dA_u}_{\text{Radiosity due to other surfaces}}$
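A rough sketch of how this global model can be simulated once the scene is broken into patches; the form-factor matrix F below is assumed to already hold the cos·cos/(π r²)·Vis·dA geometry between every patch pair (toy numbers, purely illustrative):

import numpy as np

def solve_interreflections(E, rho, F, iters=50):
    # iterate B = E + rho * (F @ B): gather light reflected from every other patch
    B = E.copy()
    for _ in range(iters):
        B = E + rho * (F @ B)
    return B

# toy example: two facing patches; only patch 0 emits, patch 1 is lit by interreflection
E = np.array([1.0, 0.0])
rho = np.array([0.5, 0.5])
F = np.array([[0.0, 0.2], [0.2, 0.0]])
print(solve_interreflections(E, rho, F))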

Page 35: EECS 274 Computer Vision

What do we do about this?

• Attempt to build approximations
  – Ambient illumination

• Study qualitative effects
  – reflexes
  – decreased dynamic range
  – smoothing

• Try to use other information to control errors

Page 36: EECS 274 Computer Vision

Ambient illumination

• Two forms
  – Add a constant to the radiosity at every point in the scene to account for brighter shadows than predicted by the point source model
    • Advantages: simple, easily managed (e.g., how would you change photometric stereo?)
    • Disadvantages: poor approximation (compare black and white rooms)
  – Add a term at each point that depends on the size of the clear viewing hemisphere at each point
    • Advantages: appears to be quite a good approximation, but the jury is out
    • Disadvantages: difficult to work with
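One possible answer to the question above for the first (constant-offset) form, sketched under the assumption that every image shares the same per-pixel ambient contribution: append an extra unknown per pixel and solve as before (needs at least 4 sources; illustrative only, not from the slides):

import numpy as np

def photometric_stereo_ambient(images, V):
    # model I_j(x,y) = V_j . g(x,y) + a(x,y): augment V with a column of ones
    n, H, W = images.shape
    A = np.hstack([V, np.ones((n, 1))])            # unknowns per pixel: (g1, g2, g3, a)
    I = images.reshape(n, -1)
    x, *_ = np.linalg.lstsq(A, I, rcond=None)
    return x[:3].reshape(3, H, W), x[3].reshape(H, W)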