Transcript of a Computational Photography lecture, EECS 6323 (eecs.yorku.ca)
Brown 1
Computational Photography:
“Illumination” Part I
Brown 2
Lecture Topic
• Discuss the limits of the “dynamic range” in
current imaging and display technology
• Solutions
1. High Dynamic Range (HDR) Imaging
– Able to image a larger dynamic range of a scene
using multiple photos
2. Tone Mapping or Tone Reproduction
– Addresses how to display an HDR image
– Actually doesn’t overcome the display range, but
produces compelling “mappings” of HDR that fit into
the range of the display
• We call this process “tone mapping”
Brown 3
Papers
We will discuss two papers in this lecture:
1. Paul Debevec and Jitendra Malik
“Recovering High Dynamic Range Radiance Maps from Photographs”
- Paul is now a famous “Graphics guy”
- Prof. Malik has been a famous “Computer vision guy” for years
and
2. Erik Reinhard and others
“Photographic Tone Reproduction for Digital Images”
- Erik’s paper made Tone Mapping a “hot topic” again
- Mainly because Paul made HDR realizable by photographs
- Erik recently authored a book on Tone Mapping
Brown 4
HDR
• SIGGRAPH, 1997
– Paul Debevec; wrote the paper while a student at Berkeley, now a Research Associate Professor and Associate Director of the Graphics Lab at the University of Southern California
• Idea
Problem with film and digital cameras:
They have limited dynamic range and have non-linear response curves to exposure (scene radiance).
Solution
Use multiple photos to recover the radiance of the scene.
Requires us to compute the non-linear response curve of the imaging device.
Result: able to determine a high dynamic range of “irradiance” falling on the sensor’s pixels . . .
Brown 5
Preliminary Terms
• Radiance: light emitted (or reflected) from a surface. Only a light source actually emits light; most things reflect light. Radiance is measured in watts per steradian per square meter.
• Irradiance: the amount of light falling onto a surface. Irradiance is measured in watts per square meter.
Note that radiance and irradiance are fundamentally different.
Brown 6
Scene Radiance
• Amount of radiance in a 3D scene varies greatly
Each point is a different radiance reading.
Brown 7
Problem
• Film and digital cameras cannot record the full dynamic range* of the radiance in a scene
– Fundamental limitation of film
– Fundamental limitation of CCD sensors
• Photographers must make a decision
– Set the exposure to capture a portion of the range of the scene
• Exposure too short: low radiances all map to 0
• Exposure too long: high radiances all map to 255 (max intensity)
*recall: dynamic range is the range from min radiance to max radiance
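The clipping behaviour described above can be sketched numerically. This is a toy linear-sensor model; the radiance values, gain, and shutter times are illustrative assumptions, not from the paper:

```python
import numpy as np

# Hypothetical scene radiances spanning a wide range (arbitrary units).
radiance = np.array([1e-4, 1e-2, 1.0, 1e2, 1e4])

def capture(radiance, dt, gain=255.0):
    # Toy linear sensor: exposure = radiance * dt, scaled and clipped to 8 bits.
    exposure = radiance * dt
    return np.clip(np.round(gain * exposure), 0, 255).astype(np.uint8)

short = capture(radiance, dt=1e-4)   # low radiances all clip to 0
long_ = capture(radiance, dt=1e1)    # high radiances all clip to 255
```

Either way, part of the scene's range is lost: the short exposure zeroes out the dark content, while the long one saturates the bright content.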
Brown 8
Example: Short exposure
[Figure: real-world radiance (10^-6 to 10^6) mapped to picture intensity (pixel values 0 to 255); the short exposure covers only the low end of the dynamic range.]
Here we can see outside the window, but the things in the room are too dark.
Brown 9
Example: Long exposure
[Figure: real-world radiance (10^-6 to 10^6) mapped to picture intensity (pixel values 0 to 255); the longer exposure covers a higher portion of the range than the short one.]
We can see more things in the room, but the scene outside the window is too bright.
Brown 10
Question Paul addressed:
How does radiance map to a pixel value “z”?
[Figure: the imaging pipeline, steps 1)-7), from scene radiance through a 12-bit sensor to the final 8-bit pixel value ‘z’.]
There are many steps from the scene to the final pixel value ‘z’.
Brown 11
Many Steps
[Figure: the same pipeline, steps 1)-7), with the 12-bit sensor value quantized and remapped to 8 bits.]
1. Scene generates radiance L
2. This can be attenuated through a lens, then hits the imaging device’s sensor (now we call it irradiance, E)
3. E is exposed for Δt seconds; the product (E • Δt) is the exposure
4. Film has a response curve to E • Δt; this response is often not linear, and the development process may also not be linear
5. If we are using a digital camera, the CCD response is linear!
6. However, this response is quantized
7. And typically (almost always) the output of step 6 is remapped through a non-linear curve to behave like film, so even though the CCD is linear, we get back a non-linear response!
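Steps 1-7 can be sketched as a toy simulation. The attenuation factor and the gamma-style response curve below are made-up stand-ins for illustration only, not the actual response of any camera:

```python
import numpy as np

def pipeline(L, dt, attenuation=0.9, gamma=1 / 2.2):
    # 2) lens attenuation: radiance L becomes sensor irradiance E
    E = attenuation * L
    # 3) exposure: irradiance integrated over the shutter time
    X = E * dt
    # 4/7) assumed non-linear (gamma-style) response curve
    response = X ** gamma
    # 6) quantize to 8 bits
    return np.clip(np.round(255 * response), 0, 255).astype(np.uint8)
```

The key point the sketch shows: the final z is a monotone but non-linear function of the exposure E • Δt.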
Brown 12
Non-linear Response Curve
• Film/digital cameras have non-linear responses in terms of exposure (E•Δt)
– For a variety of reasons (see paper)
• The question is, can we find the response curve f:
Zij = f(Ei•Δtj)
where Zij is the final pixel value (0-255) at pixel i, Ei is the irradiance at pixel i, and Δtj is the shutter speed.
Thus Ei•Δtj is the exposure of light on pixel i.
Brown 13
How to Change Exposure
• Remember: Ways to change exposure
– Shutter speed
– Aperture
– Neutral density filters
We will use “shutter speed”, but there are other options.
Follow-on papers used different techniques (filters).
Brown 14
Exposure and mapping
There is a point where too much exposure saturates the
CCD (or film) and the response flattens out at the maximum . . . (255, white)
Brown 15
Varying shutter speeds
Brown 16
Saturation, but other pixels OK
Not saturated at lower exposures, but other pixels too dark
Brown 17
Idea of the paper
Recall:
1. ‘E’ (irradiance) doesn’t change; it is the same at pixel i in all photos
taken of the same scene from the same position
2. The amount of light over time (exposure), E•Δt, does change, based
on Δt. But we know Δt: it’s the shutter speed
3. We also know ‘z’, the pixel value; this is the image we get
4. So we need to solve for ‘f’; actually, we solve for f-1
Once we have f-1, we can solve for E (dropping the subscripts):
E•Δt = f-1(z) -> E = f-1(z)/Δt
Ei is the irradiance falling on pixel i
Δtj is the shutter open time for setting j
Zij is the pixel response at pixel location i, given exposure time j
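The inversion E = f-1(z)/Δt can be sketched as follows. The stand-in response g = ln f-1 here is a hypothetical log-linear curve, used only to show the mechanics:

```python
import numpy as np

# Stand-in inverse response: a hypothetical g = ln f^-1 over z = 0..255.
g = np.log(np.linspace(1e-3, 1.0, 256))

def irradiance(z, dt):
    # E = f^-1(z) / dt, i.e. exp(g(z)) / dt
    return np.exp(g[z]) / dt
```

Note the formula's structure: doubling the shutter time halves the recovered E for the same pixel value.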
Brown 18
Math for recovering response curve
Taking logs of Zij = f(Ei•Δtj) and writing g = ln f-1, the paper minimizes:
O = Σi Σj [ g(Zij) − ln Ei − ln Δtj ]² + λ Σz g″(z)²
The second term is a regularization term – it makes the solution smooth,
where g″(z) = g(z−1) − 2g(z) + g(z+1) and λ weights the smoothness.
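The minimization (data-fitting terms plus the smoothness regularizer) is a linear least-squares problem. The NumPy sketch below follows the structure of Debevec and Malik's published gsolve routine, with the scale fixed by pinning g at the middle pixel value; the weighting function w is passed in by the caller:

```python
import numpy as np

def gsolve(Z, log_dt, lam, w):
    # Recover g(z) = ln f^-1(z) and ln E_i by least squares, after
    # Debevec & Malik. Z: (N pixels x P exposures) 8-bit values,
    # log_dt: log shutter times, lam: smoothness weight, w: weighting fn.
    n = 256
    N, P = Z.shape
    A = np.zeros((N * P + n - 1, n + N))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(N):                  # data-fitting terms
        for j in range(P):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * log_dt[j]
            k += 1
    A[k, n // 2] = 1.0                  # fix g(128) = 0 (solution is up to scale)
    k += 1
    for z in range(1, n - 1):           # smoothness: g''(z) = g(z-1)-2g(z)+g(z+1)
        wz = lam * w(z)
        A[k, z - 1], A[k, z], A[k, z + 1] = wz, -2 * wz, wz
        k += 1
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n], x[n:]                 # g(0..255), ln E_i
```

In practice Z is a small sample of well-distributed pixels, not the whole image; the recovered g is then applied to every pixel.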
Brown 19
Idea behind the math
Pixels ‘x’, ‘+’, ‘o’ are 3 different pixels undergoing 5 exposure levels.
They map to different pixel values due to the E falling on each pixel and the curve g.
g should be a smooth curve.
What is unknown? E.
Let’s adjust the E’s so they make a smooth response curve g.
Brown 20
Idea behind the math
Adjust ‘E’ of each pixel
so we get a smooth curve g.
Curve g
Adjust for x
Adjust for +
Adjust for o
Brown 21
Recovering response curve
• The solution is only determined up to scale, so add a constraint: g(Zmid) = 0, where Zmid is the middle pixel value (Zmin + Zmax)/2
• Add a “hat” weighting function w(z) that de-emphasizes pixel values near the extremes (0 and 255)
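A minimal sketch of such a hat weighting, following the paper's piecewise-linear form:

```python
def w_hat(z, zmin=0, zmax=255):
    # Piecewise-linear "hat": zero at the extremes, peak in the middle.
    # Values near 0 (noisy) and near 255 (saturated) get low weight.
    mid = (zmin + zmax) / 2
    return z - zmin if z <= mid else zmax - z
```

The rationale: pixel values near the clipping points tell us little about the true exposure, so they should contribute less to the fit.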
Brown 22
Recovered response function
Recover each R, G, B channel separately.
Brown 23
Constructing HDR radiance map
Combine the estimates from all exposures to reduce noise and obtain a more reliable estimate:
ln Ei = Σj w(Zij) ( g(Zij) − ln Δtj ) / Σj w(Zij)
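The weighted combination can be sketched as below; the hat weights are built in directly, and the stand-in g used in the test is a linear-camera curve for illustration:

```python
import numpy as np

def radiance_map(Z, g, log_dt):
    # ln E_i = sum_j w(Z_ij) * (g(Z_ij) - ln dt_j) / sum_j w(Z_ij)
    w = np.minimum(np.arange(256), 255 - np.arange(256)).astype(float)
    W = w[Z]                                # per-sample hat weights
    num = (W * (g[Z] - log_dt[None, :])).sum(axis=1)
    den = np.maximum(W.sum(axis=1), 1e-8)   # guard all-saturated pixels
    return np.exp(num / den)
```

Each exposure gives its own estimate of E at every pixel; averaging them with the hat weights favours the exposures where that pixel was well exposed.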
Brown 24
Reconstructed radiance map
This is the radiance
map of the scene.
Note the range is very
detailed (a floating point
image).
Dynamic range is:
0.005 to 121.741
Assume that 0.005 is the minimum quantization step;
then we have a range of 1 to 24349 – much higher than 0-255.
Brown 25
What is this for?
• Human perception
• Vision/graphics applications
Brown 26
A side note: Easier HDR reconstruction
raw image =
12-bit CCD snapshot
If we could get access to the RAW CCD output, it would be easy to construct an HDR image: just use multiple exposures.
The RAW CCD response is linear with exposure (more light, more voltage).
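A sketch of merging linear RAW exposures; the 12-bit cap and the rule of dropping zero or clipped readings are illustrative assumptions of this sketch:

```python
import numpy as np

def merge_raw(raws, dts, max_val=4095):
    # With a linear sensor, each exposure gives E ~ raw / dt directly.
    # Average raw/dt over exposures where the reading is neither zero
    # nor clipped at the (assumed) 12-bit maximum.
    raws = np.asarray(raws, dtype=float)
    dts = np.asarray(dts, dtype=float)
    valid = (raws > 0) & (raws < max_val)
    est = np.where(valid, raws / dts[:, None], 0.0)
    return est.sum(axis=0) / np.maximum(valid.sum(axis=0), 1)
```

No response-curve recovery is needed here, which is exactly the point of the side note: linearity makes the merge trivial.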
Brown 27
HDR Summary
• This work made HDR practical and popular
– Debevec’s website has many useful links and
software
• Idea is quite simple
– Use multiple exposures to capture the dynamic range
– Need to overcome the camera’s non-linear response
– Mathematical solution provided (code available)
– Paper is very well done
Brown 28
Tone Mapping
• SIGGRAPH, 2002
– Erik Reinhard, PhD at Bristol (UK)
– Wrote this paper while a post-doc at Utah (US)
• Now back as a Lecturer at Bristol, after a short time at U. of Central Florida
• Idea
HDR is nice, but the monitor is still limited.
How can we map the HDR back to a finite range?
The idea considers how real photographers do this. Parts of the algorithm are inspired by photographic methods developed by the famous photographer Ansel Adams.
Brown 29
Motivation
Linear remapping of HDR
for display.
Erik’s remapping of HDR
for display.
Brown 30
Ansel Adams
Famous American photographer (known for high-contrast outdoor scenes).
Developed the “zone” system for photography.
http://en.wikipedia.org/wiki/Ansel_Adams
Brown 31
Zone System
• Scene is divided into 11 zones
– Each zone represents a level of “dynamic range”
As a photographer, you’d like to capture as many zones in your photograph as possible. Adams says you need at least 9 zones to capture the “detail of a scene”. If the scene spans more than 9 zones, you’ll get saturation or a dark image.
Brown 32
Zone Approach
The photographer selects a region that is “middle grey” (for a given exposure).
This is subjective: a darker scene’s middle grey will be darker than a lighter scene’s. To describe the scene we use the term “key”.
The measurement is made with a photometer.
The photographer then selects the brightest and darkest regions.
Measuring these regions on a photometer gives an estimate of the dynamic range.
Brown 33
Controlling range and tricks
• Using these readings, compute the range
– If you have nine zones in your range, you can capture the detail
– The middle grey should be roughly an 18% brightness level in the final output
• The photographer can adjust the middle grey level
This card is printed to be 18% grey; the photographer adjusts the exposure until this grey becomes the desired middle grey.
Brown 34
If you can’t fit it all in – Dodge and Burn
• If the dynamic range is beyond 9 levels
– We will have regions too dark or too bright in the image
• You can control the final result through “dodging and burning”
– For film photography
– During development you control the light through the negative, to make parts brighter (dodging, by withholding light) or darker (burning, by adding light)
Brown 35
Terms
Recap: terms used in the paper:
• Zone: 11 print zones related logarithmically to scene luminance and sensor irradiance.
• Dynamic range for photographers: use the zones to calculate the difference between the highest and lowest scene zones (the photographic dynamic range).
• Key: subjective measure of a scene being light (high key) or dark (low key).
• Dodging and burning: print techniques where light is withheld from a region to brighten it (dodging) or added to a region to darken it (burning).
Brown 36
Erik’s approach
Algorithm:
• Use the log-average luminance to find the
"key" of a scene
• Automatic dodging and burning (as in photography): different portions of the print receive different exposure times
Brown 37
Tone Mapping
• Log average:
L̄w = exp( (1/N) Σx,y ln( δ + Lw(x,y) ) )
• Scale luminances to a key:
L(x,y) = (a / L̄w) • Lw(x,y)
a is called the “key value”: a = 0.18 would be similar to what Ansel Adams would recommend.
Lw(x,y) is the “world” luminance, i.e. the HDR data; δ is a small constant to avoid ln(0).
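The log-average and key scaling can be sketched directly:

```python
import numpy as np

def scale_to_key(Lw, a=0.18, delta=1e-6):
    # Log-average ("key") of the scene, then scale so it maps to a.
    # delta avoids log(0) for black pixels.
    Lw_bar = np.exp(np.mean(np.log(delta + Lw)))
    return (a / Lw_bar) * Lw
```

The geometric (log) mean is used rather than the arithmetic mean because luminance perception is roughly logarithmic, so it better matches the photographer's notion of the scene's key.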
Brown 38
Tone Mapping
Brown 39
Other mappings
• Compress the high luminances:
Ld(x,y) = L(x,y) / (1 + L(x,y))
• Burn high luminances in a controlled fashion:
Ld(x,y) = L(x,y) • (1 + L(x,y)/Lwhite²) / (1 + L(x,y))
where Lwhite is the desired max white level: luminances at or above Lwhite map to pure white.
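Both global operators are one-liners:

```python
def tonemap_simple(L):
    # Ld = L / (1 + L): high luminances compress toward (but never reach) 1.
    return L / (1.0 + L)

def tonemap_white(L, L_white):
    # Ld = L * (1 + L / L_white^2) / (1 + L): luminances at or above
    # L_white reach >= 1 (then clamp for display).
    return L * (1.0 + L / L_white**2) / (1.0 + L)
```

Note that tonemap_white(L_white, L_white) evaluates to exactly 1, which is the defining property of the Lwhite parameter.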
Brown 40
Controlling Max-White
Brown 41
Spatially Varying Operator: Dodging and Burning
• Typically applied to regions bounded by large contrasts
• The size of a local region is estimated using a measure of local contrast, computed at multiple spatial scales
• At each spatial scale, a center-surround function is implemented by subtracting two Gaussian-blurred images
• The Gaussian profiles are of the form:
Ri(x,y,s) = ( 1 / (π (αi s)²) ) exp( −(x² + y²) / (αi s)² )
Brown 42
Spatially Varying Operators
• Response function of image location, scale, and luminance distribution L:
Vi(x,y,s) = L(x,y) ⊗ Ri(x,y,s)
• Center-surround function:
V(x,y,s) = ( V1(x,y,s) − V2(x,y,s) ) / ( 2^φ a/s² + V1(x,y,s) )
• a is the key value, φ is a sharpening parameter
• V1 provides a local average of the luminance around (x,y), roughly in a disc of radius s
• V2 operates on a slightly larger area at the same scale
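The center-surround computation can be sketched with a small separable Gaussian blur. The blur implementation (reflect padding, 3-sigma kernel radius) is an assumption of this sketch; the 1.6x center-to-surround ratio follows the paper:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian convolution with reflect padding (a stand-in
    # for the paper's R_i convolved with L).
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode="reflect")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, rows)

def center_surround(L, s, a=0.18, phi=8.0):
    # V = (V1 - V2) / (2^phi * a / s^2 + V1); the surround V2 is
    # computed at 1.6x the center scale.
    V1 = gaussian_blur(L, s)
    V2 = gaussian_blur(L, 1.6 * s)
    return (V1 - V2) / (2.0**phi * a / s**2 + V1), V1
```

On a region of even luminance, V1 and V2 agree and V is near zero; V grows in magnitude where the center and surround differ, i.e. near contrast boundaries.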
Brown 43
Spatially Varying Operators
Brown 44
Spatially Varying Operators
• To choose the largest neighborhood around a pixel with fairly even luminance, find the largest scale sm satisfying:
|V(x,y,sm)| < ε
(start from the lowest scale, grow while this holds, and stop at the first scale where it fails)
• The global operator is converted to a local operator by replacing L with V1:
Ld(x,y) = L(x,y) / ( 1 + V1(x,y,sm) )
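The scale selection and local operator can be sketched as below, given V and V1 precomputed at each scale (smallest first); the default epsilon is an illustrative choice:

```python
import numpy as np

def local_tonemap(L, Vs, V1s, eps=0.05):
    # Per pixel, keep growing the scale while |V| < eps; use V1 at the
    # largest scale that still passed, then apply Ld = L / (1 + V1(sm)).
    chosen = V1s[0].copy()               # fallback: smallest scale
    ok = np.ones(L.shape, dtype=bool)
    for V, V1 in zip(Vs, V1s):
        ok &= np.abs(V) < eps
        chosen = np.where(ok, V1, chosen)
    return L / (1.0 + chosen)
```

This is what implements automatic dodging and burning: a bright pixel in a dark neighborhood is divided by a small V1 (brightened less aggressively than its value alone would suggest), and vice versa.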
Brown 45
Spatially Varying Operators
Brown 46
Example
Brown 47
More Results
(see slide 6 to compare against Debevec’s linear scaling)
Brown 48
(Repeat of the Scene Radiance slide from the start of the lecture, shown again for comparison)
Brown 49
Tone Mapping Summary
• Consider real photographic techniques in
mapping radiance to finite range
• Allows people to think in photographic terms:
key, zone, middle grey
• Introduce local operators to control “burn and
dodge”
– Overcome the effects of saturated or overly dark regions
• Most previous tone-mapping approaches were
for computer generated results – not photos
Brown 50
Overall Summary
• Overcome limitations of traditional cameras
– Traditional cameras are LDR
– HDR imaging overcomes the range limitation
• Work within the confines of displays
– Tone mapping
• (Note, this does not overcome the display limitation)
• But since the display is confined, we have to map the HDR back to LDR
• Future?
– HDR becomes the standard?
– Better displays – an HDR display? (it’s on its way)