EECS 274 Computer Vision


Transcript of EECS 274 Computer Vision

Page 1: EECS 274 Computer Vision

EECS 274 Computer Vision

Cameras

Page 2: EECS 274 Computer Vision

Cameras

• Camera models
  – Pinhole perspective projection
  – Affine projection
  – Spherical perspective projection
• Camera with lenses
• Sensing
• Human eye
• Reading: FP Chapter 1, S Chapter 2

Page 3: EECS 274 Computer Vision

Images are two-dimensional patterns of brightness values.

They are formed by the projection of 3D objects.

Figure from US Navy Manual of Basic Optics and Optical Instruments, prepared by Bureau of Naval Personnel. Reprinted by Dover Publications, Inc., 1969.

Page 4: EECS 274 Computer Vision

Animal eye: a looonnng time ago.

Pinhole perspective projection: Brunelleschi, XVth Century.
Camera obscura: XVIth Century.

Photographic camera: Niepce, 1816.

Reproduced by permission, the American Society of Photogrammetry and Remote Sensing. A.L. Nowicki, “Stereoscopy.” Manual of Photogrammetry, Thompson, Radlinski, and Speert (eds.), third edition, 1966.

Figure from US Navy Manual of Basic Optics and Optical Instruments, prepared by Bureau of Naval Personnel. Reprinted by Dover Publications, Inc., 1969.

Page 5: EECS 274 Computer Vision

Parallel lines in the plane π converge on a line formed by the intersection of the image plane with the plane parallel to π that passes through the pinhole.

A line L in π that is parallel to the image plane has no image at all.

A is half the size of B; C is half the size of B.

Page 6: EECS 274 Computer Vision

Vanishing point

Page 7: EECS 274 Computer Vision

Vanishing point

The lines all converge at his right eye, drawing the viewer's gaze to this point.

Page 8: EECS 274 Computer Vision

Pinhole Perspective Equation

• C': image center
• OC': optical axis
• π': image plane, at a positive distance f' from the pinhole
• OP' = λ OP

Since P' lies on the ray OP, x' = λx, y' = λy, z' = λz for some λ. The image plane is at z' = f', so λ = f'/z and

x' = f' x/z
y' = f' y/z

NOTE: z is always negative
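A minimal numeric sketch (Python) of the perspective equations above; the focal length and point coordinates are made-up illustrative values, not from the slides.

# Pinhole perspective projection: x' = f' * x / z, y' = f' * y / z.
def project_pinhole(point, f_prime):
    """Project a 3D point (camera frame) onto the image plane at z = f'."""
    x, y, z = point
    return (f_prime * x / z, f_prime * y / z)

f_prime = 0.05                     # 50 mm focal length, in meters (illustrative)
P_near = (0.2, 0.1, -1.0)          # z is negative: the scene lies in front of the camera
P_far  = (0.2, 0.1, -2.0)          # same lateral offset, twice as far away

print(project_pinhole(P_near, f_prime))  # ≈ (-0.01, -0.005)
print(project_pinhole(P_far, f_prime))   # ≈ (-0.005, -0.0025)

The second print illustrates the size relation on page 5: doubling the distance halves the image size.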

Page 9: EECS 274 Computer Vision

Affine projection models: Weak perspective projection

x' = m x
y' = m y
where m = f'/z0 is the magnification.

When the scene relief (depth) is small compared to its distance from the camera, m can be taken to be constant: weak perspective projection.

The frontal-parallel plane π0 is defined by z = z0.
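A similar sketch for weak perspective, where every point shares the reference depth z0; the values are illustrative.

# Weak perspective: replace each point's depth by the reference depth z0,
# so projection reduces to a uniform scaling by m = f' / z0.
def project_weak_perspective(point, f_prime, z0):
    x, y, _ = point                # the point's own depth is ignored
    m = f_prime / z0               # magnification (negative here, since z0 < 0)
    return (m * x, m * y)

f_prime, z0 = 0.05, -2.0
print(project_weak_perspective((0.2, 0.1, -1.9), f_prime, z0))  # ≈ (-0.005, -0.0025)
print(project_weak_perspective((0.2, 0.1, -2.1), f_prime, z0))  # same image: small depth variation is ignored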

Page 10: EECS 274 Computer Vision

Affine projection models: Orthographic projection

x' = x
y' = y

When the camera is at a (roughly constant) distance from the scene, take m = 1.

Page 11: EECS 274 Computer Vision

Planar pinhole perspective

Orthographicprojection

Spherical pinholeperspective

Page 12: EECS 274 Computer Vision

Pinhole too big - many directions are averaged, blurring the image

Pinhole too small - diffraction effects blur the image

Generally, pinhole cameras are dark, because a very small set of rays from a particular point hits the screen.

Page 13: EECS 274 Computer Vision

Lenses

Snell's law (aka Descartes' law):

n1 sin θ1 = n2 sin θ2

n: index of refraction

(Figure: reflection and refraction at an interface.)
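A small sketch that solves Snell's law for the refraction angle; the indices (air into glass) are illustrative values.

import math

def refraction_angle(theta1, n1, n2):
    """Return theta2 (radians) from n1*sin(theta1) = n2*sin(theta2), or None past the critical angle."""
    s = n1 * math.sin(theta1) / n2
    if abs(s) > 1.0:
        return None                      # total internal reflection: no refracted ray
    return math.asin(s)

theta2 = refraction_angle(math.radians(30.0), n1=1.0, n2=1.5)
print(math.degrees(theta2))              # ≈ 19.47°: the ray bends toward the normal in the denser medium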

Page 14: EECS 274 Computer Vision

Paraxial (or first-order) optics

Snell's law:

n1 sin θ1 = n2 sin θ2

Small angles:

n1 θ1 = n2 θ2

Page 15: EECS 274 Computer Vision

Paraxial (or first-order) optics

Small angles:

n1 θ1 = n2 θ2, with θ1 ≈ h/R + h/d1 and θ2 ≈ h/R - h/d2

which gives the paraxial refraction equation:

n1/d1 + n2/d2 = (n2 - n1)/R
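A numeric sketch of the paraxial refraction equation above, solving for the image distance d2; the index values, radius, and object distance are illustrative, and the usual sign convention (all distances positive for a real image) is assumed.

# Paraxial refraction at a single spherical interface:
#   n1/d1 + n2/d2 = (n2 - n1)/R   =>   d2 = n2 / ((n2 - n1)/R - n1/d1)
def paraxial_image_distance(d1, n1, n2, R):
    return n2 / ((n2 - n1) / R - n1 / d1)

# Air-to-glass interface, R = 0.1 m, object 1 m away (illustrative numbers)
print(paraxial_image_distance(d1=1.0, n1=1.0, n2=1.5, R=0.1))   # 0.375 m beyond the interface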

Page 16: EECS 274 Computer Vision

Thin Lenses

x' = x z'/z
y' = y z'/z

where 1/z' - 1/z = 1/f and f = R / (2(n - 1))

f: focal length; F, F': focal points
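A minimal sketch of the thin-lens relations above; the radius, refractive index, and scene depth are made-up values, using the slide's convention that scene points have negative z.

# Thin lens: 1/z' - 1/z = 1/f, with f = R / (2 * (n - 1)).
def thin_lens_focal_length(R, n):
    return R / (2.0 * (n - 1.0))

def thin_lens_image_depth(z, f):
    """Depth z' at which a point at depth z is brought into focus."""
    return 1.0 / (1.0 / f + 1.0 / z)

f = thin_lens_focal_length(R=0.05, n=1.5)    # 0.05 m focal length
print(f)
print(thin_lens_image_depth(z=-2.0, f=f))    # ≈ 0.0513 m: just beyond the focal plane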

Page 17: EECS 274 Computer Vision

Depth of field and field of view

• Depth of field (field of focus): objects within a certain range of distances are in acceptable focus
  – Depends on focal length and aperture
• Field of view: the portion of scene space that is actually projected onto the camera sensor (see the sketch below)
  – Defined not only by focal length
  – But also by the effective sensor area
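One common way to quantify field of view is the angle subtended by the sensor as seen from the lens, fov = 2 * atan(d / (2f)), with d the effective sensor size and f the focal length. The sketch below uses illustrative full-frame numbers (36 mm sensor width); the formula and the values are assumptions, not taken from the slides.

import math

def field_of_view_deg(sensor_size_mm, focal_length_mm):
    """Angular field of view across a sensor dimension of the given size."""
    return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

print(field_of_view_deg(36.0, 50.0))   # ≈ 39.6°: "normal" lens on a 36 mm-wide sensor
print(field_of_view_deg(36.0, 200.0))  # ≈ 10.3°: a longer focal length narrows the view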

Page 18: EECS 274 Computer Vision

Depth of field

• Changing the aperture size affects depth of field
  – A smaller aperture increases the range in which the object is approximately in focus

(Example images: f/5.6 vs. f/32.)

Page 19: EECS 274 Computer Vision

Thick lenses

• Simple lenses suffer from several aberrations

• First order approximation is not sufficient
• Use 3rd order Taylor approximation

Page 20: EECS 274 Computer Vision

Orthographic (“telecentric”) lenses

http://www.lhup.edu/~dsimanek/3d/telecent.htm

Navitar telecentric zoom lens

Page 21: EECS 274 Computer Vision

Correcting radial distortion

from Helmut Dersch
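The slide does not spell out the correction method; below is a minimal sketch assuming a simple polynomial radial model, r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4). The coefficients and the sampling scheme are illustrative assumptions, not Dersch's actual algorithm or parameters.

import numpy as np

def undistort(image, k1=-0.15, k2=0.0):
    """Resample a distorted image onto an undistorted grid (nearest-neighbour)."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xs - cx) / cx, (ys - cy) / cy          # normalized coords, image center at (0, 0)
    r2 = xn**2 + yn**2
    scale = 1.0 + k1 * r2 + k2 * r2**2               # radial scaling of the distortion model
    src_x = np.clip(np.round(xn * scale * cx + cx), 0, w - 1).astype(int)
    src_y = np.clip(np.round(yn * scale * cy + cy), 0, h - 1).astype(int)
    return image[src_y, src_x]                       # look up each output pixel in the distorted input

corrected = undistort(np.random.rand(480, 640))      # stand-in for a real photograph
print(corrected.shape)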

Page 22: EECS 274 Computer Vision

Spherical aberration
• rays do not intersect at one point
• circle of least confusion

Distortion
• pincushion, barrel

Chromatic aberration
• refracted rays of different wavelengths intersect the optical axis at different points

Figure from US Navy Manual of Basic Optics and Optical Instruments, prepared by Bureau of Naval Personnel. Reprinted by Dover Publications, Inc., 1969.

Page 23: EECS 274 Computer Vision

Vignetting

• Aberrations can be minimized by well-chosen lens shapes and refraction indices, separated by appropriate stops
• However, light rays from object points off-axis are partially blocked by the lens configuration (vignetting), causing a brightness drop in the image periphery

Page 24: EECS 274 Computer Vision

The Human Eye

Helmholtz's Schematic Eye

Reproduced by permission, the American Society of Photogrammetry and Remote Sensing. A.L. Nowicki, “Stereoscopy.” Manual of Photogrammetry, Thompson, Radlinski, and Speert (eds.), third edition, 1966.

Cornea: transparent, highly curved refractive component
Pupil: opening at the center of the iris that adjusts in response to illumination

Page 25: EECS 274 Computer Vision

Retina: thin, layered membrane with two types of photoreceptors

• rods: very sensitive to light but poor spatial detail
• cones: sensitive to spatial detail but active at higher light levels
• generally called receptive field

Reprinted from Foundations of Vision, by B. Wandell, Sinauer Associates, Inc., (1995). 1995 Sinauer Associates, Inc.

Cones in the fovea

Rods and cones in the periphery

Reprinted from Foundations of Vision, by B. Wandell, Sinauer Associates, Inc., (1995). 1995 Sinauer Associates, Inc.

Page 26: EECS 274 Computer Vision

Photographs (Niepce, “La Table Servie,” 1822)

Milestones:
Daguerreotypes (1839)
Photographic Film (Eastman, 1889)
Cinema (Lumière Brothers, 1895)
Color Photography (Lumière Brothers, 1908)
Television (Baird, Farnsworth, Zworykin, 1920s)
CCD Devices (1970)

Collection Harlingue-Viollet.

Page 27: EECS 274 Computer Vision

360 degree field of view…

• Basic approach
  – Take a photo of a parabolic mirror with an orthographic lens (Nayar)
  – Or buy a lens from a variety of omnicam manufacturers…

• See http://www.cis.upenn.edu/~kostas/omni.html

Page 28: EECS 274 Computer Vision

Digital camera

• A digital camera replaces film with a sensor array
  – Each cell in the array is a Charge Coupled Device
    • light-sensitive diode that converts photons to electrons
    • other variants exist: CMOS is becoming more popular
    • http://electronics.howstuffworks.com/digital-camera.htm

Page 29: EECS 274 Computer Vision

Image sensing pipeline