hw1


James Yu
EE362, Homework 1

Viewing Geometry

(a)

function space = dpi2space(dpi,d)
dist = 1 / dpi;                   % distance between dots (inches)
r = atan(dist / (2*d));           % half the subtended angle, in radians
space = 2*r*(360/(2*pi)) * 3600;  % full angle, converted to arc seconds

function dpi = space2dpi(s,d)
r = (s/3600)*(2*pi/360);  % convert from arc seconds to radians
dist = 2*d*tan(r/2);      % distance between dots (inches)
dpi = 1 / dist;

(b) Using my space2dpi function, I obtain 2864 dpi at a viewing distance of 12 inches, and 954 dpi at 36 inches.
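The two MATLAB functions translate directly to Python. This sketch reproduces the numbers in (b); the 6-arc-second dot spacing is my inference, since it is the value that yields 2864 and 954 dpi.

```python
import math

def dpi2space(dpi, d):
    """Visual angle between dots (arc seconds) for a given dpi
    at viewing distance d (inches)."""
    dist = 1.0 / dpi                       # distance between dots (inches)
    r = math.atan(dist / (2 * d))          # half the subtended angle (radians)
    return 2 * r * (180 / math.pi) * 3600  # full angle in arc seconds

def space2dpi(s, d):
    """Required dpi for a dot spacing of s arc seconds
    at viewing distance d (inches)."""
    r = (s / 3600) * (math.pi / 180)       # arc seconds -> radians
    dist = 2 * d * math.tan(r / 2)         # distance between dots (inches)
    return 1.0 / dist

# With a 6 arc-second spacing (assumed): ~2864 dpi at 12 in, ~954 dpi at 36 in.
```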

(c) (a) When the pupil changes size, the eye acts like a pinhole camera: the height of the image stays constant, but its blur changes. This is evident in the following diagram:

(b) angle = atan(200/400) ≈ 26.6 degrees

[Diagram: retinal image]


(c) Using the lens equation, the image plane is 111 mm behind the lens. The image will be 0.55 mm tall and inverted. The following diagram illustrates this:
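The stated numbers are consistent with, for example, a 100 mm focal length and a 5 mm object at 1 m; those parameters are my assumption, not taken from the problem. A quick check of the thin-lens equation 1/f = 1/d_o + 1/d_i under that assumption:

```python
# Assumed values: focal length, object distance, object height (all mm)
f, d_o, h_o = 100.0, 1000.0, 5.0

d_i = 1 / (1/f - 1/d_o)  # thin-lens equation: 1/f = 1/d_o + 1/d_i
m = -d_i / d_o           # magnification (negative => inverted image)
h_i = m * h_o            # image height (mm)

print(round(d_i, 1), round(h_i, 2))  # 111.1 -0.56
```

This lands the image about 111 mm behind the lens and roughly 0.55 mm tall and inverted, matching the values quoted above.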

(d) I already calculated the image to be 0.55 mm tall in part (c).

(e) Using my MATLAB script, I find that 600 dpi corresponds to a dot spacing of about 28.64 arc seconds, or about 125.66 dots per degree of visual angle.

(f) 0.4 m is about 15.7 inches and 0.2 m is about 7.87 inches, so the display resolution is 1000 / 7.87 = 127 dpi. Using my MATLAB script to convert dpi to visual-angle spacing, 127 dpi gives 103.4 arc seconds between dots. Therefore the number of pixels per degree of visual angle is 3600 / 103.4 = 34.8.
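The chain of conversions in (f) can be checked end to end. This Python sketch uses the rounded inch values from the text (15.7 in viewing distance, 7.87 in display width, 1000 pixels):

```python
import math

view_in = 15.7     # 0.4 m viewing distance, in inches (rounded as in the text)
display_in = 7.87  # 0.2 m display width, in inches
pixels = 1000

dpi = pixels / display_in  # ~127 dpi
dot = 1 / dpi              # inches between adjacent pixels
# Full visual angle between dots, in arc seconds
arcsec = 2 * math.atan(dot / (2 * view_in)) * (180 / math.pi) * 3600
pix_per_deg = 3600 / arcsec

print(round(dpi), round(arcsec, 1), round(pix_per_deg, 1))  # 127 103.4 34.8
```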



Point Spread calculations

[Figure: original image]

(c) In the difference image, I mostly see high-frequency detail in the vertical direction, since the asymmetric point spread function I chose spreads more widely vertically. This makes sense: a person with this point spread function would have worse resolution in the vertical direction.

(d) We could instead convolve the image once with the difference between the two point spread functions. Because convolution is linear, the difference of the two convolved images is the same as the image convolved with the difference of the point spread functions.
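This linearity is easy to verify numerically. A small 1-D Python sketch (the stand-in image and point spread functions here are hypothetical, not the ones used above):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random(64)                  # stand-in 1-D "image"
psf1 = np.array([0.25, 0.5, 0.25])    # hypothetical point spread functions
psf2 = np.array([0.1, 0.8, 0.1])

# Difference of the two blurred images...
diff_of_convs = np.convolve(img, psf1) - np.convolve(img, psf2)
# ...equals the image convolved with the difference of the PSFs.
conv_of_diff = np.convolve(img, psf1 - psf2)

assert np.allclose(diff_of_convs, conv_of_diff)
```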

Chromatic Aberration – Simulation

(a) Looking at the chart, I estimate the peak wavelength sensitivities of the cones to be: S = 440 nm, M = 540 nm, L = 580 nm. Referring to the MTF, I estimate the highest frequency reaching each cone type to be: S = 3 cpd, M = 15 cpd, L = 25 cpd.

(b) The cone sampling rate must be at least twice the highest frequency that can appear at the input of each cone type. Therefore, S = 6 samples per degree, M = 30 per degree, and L = 50 per degree. These roughly match the mosaic densities of each cone type.

(c) We note that the S-cone mosaic can only represent frequencies up to about 3 cpd. Thus, we will see aliasing for the 8 cpd sinusoid.
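The aliasing is easy to demonstrate: sampled at the S-cone rate of about 6 samples per degree, an 8 cpd sinusoid produces exactly the same samples as its |8 - 6| = 2 cpd alias. A Python sketch:

```python
import numpy as np

fs = 6                        # S-cone sampling rate, samples per degree
t = np.arange(0, 10, 1 / fs)  # sample positions over 10 degrees

s8 = np.sin(2 * np.pi * 8 * t)  # 8 cpd input, above the 3 cpd Nyquist limit
s2 = np.sin(2 * np.pi * 2 * t)  # 2 cpd alias (8 mod 6 = 2)

assert np.allclose(s8, s2)  # the sampled values are indistinguishable
```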

Here are my original sinusoids:

[Figure: 2 cpd and 8 cpd sinusoids, plotted over 0-10 degrees of visual angle]

And here are the sampled versions, with and without chromatic aberration (which reduces the amplitude):

[Figures: 2 cpd and 8 cpd sinusoids sampled by S-cones, each with and without chromatic aberration, over 0-10 degrees of visual angle]

Using the color matching functions


(a) For the 550 nm light, the monitor linear intensities need to be {-0.0030, 0.0151, -0.0015}.

For the 430 nm light, the monitor linear intensities need to be {0.0041, -0.0026, 0.0118}.

(b)

[Figure: scaled phosphor functions (phosphor1, phosphor2, phosphor3) needed to reproduce 550 nm]

[Figure: scaled phosphor functions (phosphor1, phosphor2, phosphor3) needed to reproduce 430 nm]

These are not physically realizable, since they require negative coefficients on the phosphors, and we cannot create negative light.

(c) We could run a color matching experiment. On the stimulus side we present the monochromatic lights (550 nm and 430 nm); on the test side, the subject turns the knobs on three lights with the SPDs of the phosphors until they match the monochromatic lights. The resulting phosphor coefficients should match what we got in part (a). Note that we must allow the subject to move a phosphor light over to the stimulus side, since that is the only way to realize the negative coefficients.

(d) We would simply left-multiply any XYZ vector by the matrix

 0.0540  -0.0265  -0.0079
-0.0114   0.0201   0.0003
 0.0006  -0.0018   0.0084

to obtain the linear intensities for this particular monitor.
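As a sketch of the matrix in use (the matrix entries are copied from part (d); the XYZ input here is just an illustrative value):

```python
import numpy as np

# XYZ-to-linear-intensity matrix for this particular monitor (from part (d))
M = np.array([
    [ 0.0540, -0.0265, -0.0079],
    [-0.0114,  0.0201,  0.0003],
    [ 0.0006, -0.0018,  0.0084],
])

xyz = np.array([0.5, 0.4, 0.3])  # hypothetical XYZ tristimulus values
rgb = M @ xyz                    # linear phosphor intensities for this monitor
```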

Dichromacy


(a) I think dichromats still see a color when presented with a stimulus aimed at the cone class they lack. For example, when a blue stimulus is shown to someone missing S-cones, their M- and L-cones still respond in the usual linear way, but only weakly. They will therefore perceive the blue as much darker than we do, and probably will not get the same “blue” sensation we do (rather, a mix of red and green). Of course, this sensation of “blueness” is subjective, and for all we know it could still be blue to them.

The same goes for people missing L-cones: they will perceive red as rather green and slightly bluish. In their case green will dominate when they look at red, since the M- and L-cone sensitivity functions lie close together.

(b) These accounts agree with what I argued in part (a): even though people with a red/green deficiency have trouble telling those two colors apart, they are still able to see colors.

(c) Someone missing two cone classes essentially sees in monochrome. With only one cone type, they have only one degree of freedom in their color vision, so they can perceive only shades of a single color.

A good test would be the color matching experiment: dichromats should be able to match any color with just two knobs, and monochromats with only one. After a few trials, we could judge with reasonable certainty that a person has some sort of cone deficiency.

To get an idea of what colors look like to a cone-deficient person, we would give them two knobs (or one) corresponding to the cone types they do have, and let them match colors. The matches they produce are most likely how they perceive the stimulus.