
Computer Graphics

CMU 15-462/15-662, Fall 2016

Monte Carlo Integration

Talk Announcement

Jovan Popovic, Senior Principal Scientist at Adobe Research, will be giving a seminar on "Character Animator" -- Monday, October 24, from 3-4 in NSH 1507.

From last class…

What we really want is to solve the reflection equation:

    L_o(p, ω_o) = ∫_{H²} f_r(p, ω_i → ω_o) L_i(p, ω_i) cos θ_i dω_i

[Figure: reflection at a surface point p, with incoming directions ω_i1, ω_i2, outgoing direction ω_o, and surface normal N]

Review: fundamental theorem of calculus

If f is continuous over [a, b] and F is defined over [a, b] as:

    F(x) = ∫_a^x f(t) dt

then:

    F'(x) = f(x)

and:

    ∫_a^b f(x) dx = F(b) − F(a)

Definite integral: "area under curve"

Simple case: constant function

    ∫_a^b c dx = c (b − a)

Affine function:

    ∫_a^b f(x) dx = (b − a) (f(a) + f(b)) / 2

Piecewise affine function

Sum of integrals of the individual affine components.

Piecewise affine function

If the N−1 segments are of equal length h = (b − a)/(N − 1):

    ∫_a^b f(x) dx = h (f(x_1)/2 + f(x_2) + … + f(x_{N−1}) + f(x_N)/2)

A weighted combination of measurements.

Arbitrary function f(x)?

Quadrature rule: an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration.

Trapezoidal rule

Approximate the integral of f(x) by assuming the function is piecewise linear.

For n equal-length segments, with h = (b − a)/n:

    ∫_a^b f(x) dx ≈ h (f(x_0)/2 + f(x_1) + … + f(x_{n−1}) + f(x_n)/2)

Trapezoidal rule

Consider the cost and accuracy of the estimate as h → 0 (or n → ∞).

Work: O(n). Error can be shown to be O(h²) = O(1/n²)
(for f(x) with continuous second derivative)
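The rule above can be sketched in a few lines (a minimal illustration; the function and parameter names are mine, not the slides'):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule over [a, b] with n equal-length segments."""
    h = (b - a) / n
    # Endpoints get weight h/2; interior nodes get weight h.
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# The integral of x^2 over [0, 1] is exactly 1/3; the estimate's
# error shrinks roughly as O(1/n^2) for smooth integrands.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 100)
```

Doubling n should cut the error by roughly a factor of four, consistent with the O(h²) bound.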

Integration in 2D

Consider integrating f(x, y) using the trapezoidal rule (apply the rule twice: once when integrating in x, and once in y).

Must do much more work in 2D (an n × n set of measurements) to get the same error bound on the integral! Rendering requires computation of infinite-dimensional integrals -- more soon!

Monte Carlo Integration

A first example…

Consider the following integral:

    ∫∫ f(x, y) dx dy

where

    f(x, y) = { 1 if x² + y² < 1
              { 0 otherwise

What does this integral "mean"?

We'll come back to this…

A brief intro to probability theory…

A random variable x is a quantity whose value depends on a set of possible random events. A random variable can take on different values, each with an associated probability.

A die roll: x can take on values 1, 2, 3, 4, 5 or 6 with equal probability.

Random variables can be discrete or continuous.

A dart's landing position: x is continuous and 2D. The probability of a bulls-eye depends on who is throwing.

Discrete random variables

x can take one of n discrete values x_i, each with probability p_i.

It must always be the case that:

    p_i ≥ 0   and   Σ_{i=1..n} p_i = 1

What is the average (or expected) value x will take after many, many experiments?

    E[x] = Σ_{i=1..n} x_i p_i

    E[f(x)] = Σ_{i=1..n} f(x_i) p_i

Q: What is the expected value of rolling a die?
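A: With each face having probability 1/6, the definition above gives:

```latex
E[x] = \sum_{i=1}^{6} i \cdot \tfrac{1}{6} = \frac{1+2+3+4+5+6}{6} = \frac{21}{6} = 3.5
```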

Useful properties of Expected Value

Let x and y be independent random variables, or functions of independent random variables:

    E[x + y] = E[x] + E[y]   "expected value of sum is sum of expected values"

    E[x · y] = E[x] · E[y]   "expected value of product is product of expected values"

(The sum rule, linearity of expectation, in fact holds even without independence; only the product rule requires it.)

Q1: What does it mean that two random variables are independent?

Q2: What's an example of two random variables that are correlated?

Variance of a discrete random variable

"How far from the expected value can we expect the outcome of each instance of the experiment to be?"

    V[x] = Σ_{i=1..n} (x_i − E[x])² p_i = E[(x − E[x])²]
         = E[x²] − E[x]²

Q: What are some advantages of this last expression?

If x and y are independent random variables, or functions of independent random variables:

    V[x + y] = V[x] + V[y]   "variance of sum is sum of variances"

Continuous random variables

x can take infinitely many values according to a probability density function p(x).

It must always be the case that:

    p(x) ≥ 0   and   ∫_{−∞}^{∞} p(x) dx = 1

Probability of a specific result for an experiment:

    Pr(x = a) = 0

    Pr(a ≤ x ≤ b) = ∫_a^b p(x) dx

Continuous random variables

Cumulative Distribution Function:

    CDF(b) = ∫_{−∞}^b p(x) dx

The CDF increases monotonically from 0 to 1, and:

    Pr(a ≤ x ≤ b) = CDF(b) − CDF(a)

Cumulative distribution function for discrete random variables:

    PDF: p_i
    CDF: P_i = Σ_{j=1..i} p_j

Discrete vs Continuous RVs

Concepts from the discrete case carry over to the continuous case. Just replace sums with integrals as needed!

    E[f(x)] = Σ_{i=1..n} f(x_i) p_i      (discrete)

    E[f(x)] = ∫_a^b f(x) p(x) dx         (continuous)

One more experiment

▪ Assume x_1, x_2, …, x_N are independent random variables
  - e.g. repeating the same experiment over and over
▪ Let G(X) = (1/N) Σ_{j=1..N} g(x_j)
  - e.g. average score after you throw 10 darts

"expected value of sum is sum of expected values":

    E[G(X)] = E[(1/N) Σ_{j=1..N} g(x_j)] = (1/N) Σ_{j=1..N} E[g(x_j)] = E[g(x)]

The expected value of the average of N trials is the same as the expected value of 1 trial!

Monte Carlo Integration

▪ Want to estimate the integral:

    I = ∫_a^b f(x) dx

▪ Make an estimate:

    Î_1 = f(x_1) / p(x_1) = g(x_1)

▪ Assume for now a uniform probability distribution
  - How do you randomly choose a point x_1 in the interval [a, b]?
  - What is the value of p(x_1)?
  - What does the estimate look like for a constant function?

Monte Carlo Integration

▪ Want to estimate the integral:

    I = ∫_a^b f(x) dx

▪ Make an estimate:

    Î_1 = f(x_1) / p(x_1) = g(x_1)

▪ What is the expected value of the estimate?

    E[Î_1] = ∫_a^b (f(x) / p(x)) p(x) dx = ∫_a^b f(x) dx = I

▪ So we're done…

Monte Carlo Integration

▪ Not so fast…
  - This is like trying to decide based on one toss if a coin is fair or biased…
▪ Why is it that you expect to get better estimates by running more trials (i.e. Î_N)?
  - the expected value does not change…
▪ Look at the variance of the estimate after N trials:

    Î_N = (1/N) Σ_{i=1..N} g(x_i)

Part 2 of last experiment

▪ Assume x_1, x_2, …, x_N are independent random variables
  - e.g. repeating the same experiment over and over
▪ Let G(X) = (1/N) Σ_{j=1..N} g(x_j)
  - e.g. average score after you throw 10 darts

"variance of sum is sum of variances (assuming independent RVs)":

    V[G(X)] = V[(1/N) Σ_{j=1..N} g(x_j)] = (1/N²) Σ_{j=1..N} V[g(x_j)] = (1/N) V[g(x)]

The variance of N averaged trials decreases linearly with N compared to the variance of each trial!

Monte Carlo Integration

▪ In 3 easy steps:
  - Define a probability distribution to draw samples from
  - Evaluate the integrand at each sample
  - The estimate is a weighted average of the function samples:

    ∫_a^b f(x) dx ≈ (1/N) Σ_{i=1..N} f(x_i) / p(x_i)

Q: how do we get the variance of the estimate to decrease?
A: Increase N, or decrease V[g(x)]

A Variance Reduction Method

    g(x) = f(x) / p(x)

    p_1(x) = 1 / (b − a)
    p_2(x) = …

Non-uniform ("biased") sampling can reduce the variance of g(x).

If the PDF is proportional to f, g will have zero variance! Only one sample needed! A free lunch?
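To see this concretely, consider estimating ∫₀¹ 3x² dx = 1 (an example of mine, not from the slides). With uniform sampling the per-sample values 3x² vary a lot; with p(x) = 3x², sampled by inversion, every sample of f(x)/p(x) equals exactly 1:

```python
import random

def estimate_uniform(n, seed=0):
    """Estimate the integral of 3x^2 on [0,1] with uniform samples, p(x) = 1."""
    rng = random.Random(seed)
    return sum(3.0 * rng.random() ** 2 for _ in range(n)) / n

def estimate_importance(n, seed=0):
    """Same integral with p(x) = 3x^2: f(x)/p(x) = 1 for every sample."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        xi = 1.0 - rng.random()      # in (0, 1], so x > 0 and f/p is defined
        x = xi ** (1.0 / 3.0)        # inversion: CDF(x) = x^3
        total += (3.0 * x * x) / (3.0 * x * x)   # always exactly 1
    return total / n
```

The catch, of course, is that sampling proportionally to f requires knowing f's integral, which is the very thing we are trying to compute.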

How do we generate samples according to an arbitrary probability distribution?

The discrete case: consider a loaded die…

Draw a uniform random variable ξ in [0, 1); select outcome x_i if ξ falls in its CDF bucket, i.e. if CDF(x_{i−1}) ≤ ξ < CDF(x_i).

The continuous case

Exactly the same idea! Draw ξ uniformly in [0, 1), then:

    x = CDF⁻¹(ξ)

But we must know the inverse of the Cumulative Distribution Function!

And if we don't?

Rejection Sampling

Suppose p(x) is defined on [a, b] and bounded above by M:

    do forever {
        sample u uniformly in [0, M]
        sample t uniformly in [a, b]
        if (u <= p(t))
            return t;
    }

Accepted points (t, u) are uniform under the graph of p, so the returned values t are distributed according to p.

Uniformly sampling unit disk: a first try

▪ θ = uniform random angle between 0 and 2π
▪ r = uniform random radius between 0 and 1
▪ Return point: (r cos θ, r sin θ)

This algorithm does not produce uniform samples on the 2D disk. Why?

Uniformly sampling unit disk

WRONG: not equi-areal -- drawing r uniformly clusters samples near the center.

RIGHT: equi-areal -- equal areas of the disk receive equal probability.
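The equi-areal mapping draws the radius as the square root of a uniform variable, since the area inside radius r grows as r². A sketch (function names are mine, not the slides'):

```python
import math
import random

def sample_unit_disk(rng):
    """Uniform (equi-areal) sample on the unit disk.
    r = sqrt(xi) compensates for rings of radius r having
    circumference proportional to r."""
    r = math.sqrt(rng.random())
    theta = 2.0 * math.pi * rng.random()
    return (r * math.cos(theta), r * math.sin(theta))

rng = random.Random(0)
pts = [sample_unit_disk(rng) for _ in range(100_000)]
```

A quick check of uniformity: the disk of radius 1/2 has a quarter of the total area, so about 25% of the samples should land inside it.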

Uniform sampling of unit disk via rejection sampling

Generate a random point within the unit circle:

    do {
        x = 1 - 2 * rand01();
        y = 1 - 2 * rand01();
    } while (x*x + y*y > 1.);

Sampling 2D directions

Goal: generate random directions in 2D with uniform probability.

    x = 1 - 2 * rand01();
    y = 1 - 2 * rand01();
    r = sqrt(x*x + y*y);
    x_dir = x/r;
    y_dir = y/r;

This algorithm is not correct. What is wrong?

Rejection sampling for 2D directions

Goal: generate random directions in 2D with uniform probability.

    do {
        x = 1 - 2 * rand01();
        y = 1 - 2 * rand01();
    } while (x*x + y*y > 1.);
    r = sqrt(x*x + y*y);
    x_dir = x/r;
    y_dir = y/r;

Rejecting points outside the disk removes the bias toward the square's diagonals: accepted points are uniform in the disk, so normalizing them yields uniform angles.

Back to our problem: lighting estimate

What we really want is to solve the reflection equation.

[Figure: reflection at a surface point p, with incoming directions ω_i1, ω_i2, outgoing direction ω_o, and surface normal N]

Back to our problem: lighting estimate

Want to integrate the reflection term over the hemisphere, using uniformly sampled directions. The MC estimator:

    L_o(p, ω_o) ≈ (1/N) Σ_{j=1..N} f_r(p, ω_j → ω_o) L_i(p, ω_j) cos θ_j / p(ω_j)

An example scene… [Figure: a light source and an occluder that blocks light]

Hemispherical solid angle sampling, 100 sample rays (random directions drawn uniformly from the hemisphere).

Why is the image "noisy"?

The lighting estimator uses different random directions for each pixel. Some of those directions point towards the light, others do not.

How can we make it better?

"Biasing" sample selection for rendering applications

▪ Note: "biasing" the selection of random samples is different than creating a biased estimator!
▪ A variance reduction method, but can think of it also as importance sampling

Importance sampling example

Cosine-weighted hemisphere sampling in the irradiance estimate: the integrand contains a cos θ factor, so choose p(ω) ∝ cos θ.

Note: samples over the hemisphere must also be drawn according to this probability distribution!
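One standard way to draw cosine-weighted directions is Malley's method: sample the unit disk uniformly, then project the point up onto the hemisphere. A sketch (assuming the surface normal is the z-axis; function names are mine, not the slides'):

```python
import math
import random

def sample_cosine_hemisphere(rng):
    """Cosine-weighted hemisphere sample via Malley's method:
    a uniform disk sample projected up onto the hemisphere."""
    r = math.sqrt(rng.random())
    phi = 2.0 * math.pi * rng.random()
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    # Lift onto the unit hemisphere; z = cos(theta) relative to the normal.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

rng = random.Random(0)
dirs = [sample_cosine_hemisphere(rng) for _ in range(100_000)]
```

A sanity check: under p(ω) = cos θ / π, the expected value of cos θ is 2/3, so the average z of many samples should be close to that.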

Summary: Monte Carlo integration

▪ Estimate the value of an integral using random sampling
  - Why is it important that the samples are truly random?
  - The algorithm gives the correct value of the integral "on average"
▪ Only requires the function to be evaluated at random points on its domain
  - Applicable to functions with discontinuities, functions that are impossible to integrate directly
▪ Error of the estimate is independent of the dimensionality of the integrand
  - Faster convergence than non-randomized quadrature methods when estimating high-dimensional integrals
  - Suffers from noise due to variance in the estimate
  - More samples and variance reduction methods can help