
Secure Computation (Lecture 5)

Arpita Patra

Recap

>> Scope of MPC

> models of computation

> network models

> modelling distrust (centralized/decentralized adversary)

> modelling adversary

> Various Parameters/questions asked in MPC

>> Defining Security of MPC

> Ideal World & Real world

> Indistinguishability of the view of the adversary in the real and ideal (with the help of a simulator) worlds

> Indistinguishability of the joint distribution of the outputs of the honest parties and the view of the adversary in the real and ideal (with the help of a simulator) worlds

Ideal World MPC

The Ideal World: the parties P1, P2, P3, P4 hand their inputs x1, x2, x3, x4 to a trusted party, which computes any task (y1, y2, y3, y4) = f(x1, x2, x3, x4) and returns each Pi its output yi.

The Real World: the parties compute (y1, y2, y3, y4) = f(x1, x2, x3, x4) by running a protocol among themselves, with no trusted party.

How do you compare Real world with Ideal World?

>> Fix the inputs of the parties, say x1,…,xn

>> The real-world view of the adversary contains no more info than the ideal-world view

View_i^Real : the view of Pi on input (x1,…,xn) - the leaked values

The Real World (P3 corrupted): {View_i^Real}_{Pi in C} = {x3, y3, r3, protocol transcript}

View_i^Ideal : the view of Pi on input (x1,…,xn) - the allowed values

The Ideal World (P3 corrupted): {View_i^Ideal}_{Pi in C} = {x3, y3}

Our protocol is secure if the leaked values contain no more information than the allowed values

Real world (leaked values) vs. Ideal world (allowed values)

The Real World: leaked values {x3, y3, r3, protocol transcript}   vs.   The Ideal World: allowed values {x3, y3}

>> The protocol is secure if the leaked values can be efficiently computed from the allowed values.

>> Such an algorithm is called a SIMULATOR (it simulates the view of the adversary in the real protocol).

>> It is enough if SIM creates a view of the adversary that is “close enough” to the real view, so that the adversary cannot distinguish it from its real view.

Definition 1: View Indistinguishability of the Adversary in the Real and Ideal Worlds

The Real World: {View_i^Real}_{Pi in C} = {x3, y3, r3, protocol transcript}, a random variable/distribution over the random coins of the parties.

The Ideal World: SIM, the ideal adversary, interacts on behalf of the honest parties and produces {View_i^Ideal}_{Pi in C} from the allowed values {x3, y3}, a random variable/distribution over the random coins of SIM and the adversary.

Definition 2: Indistinguishability of Joint Distributions of Output and View

>> The joint distribution of the outputs of the honest parties and the views of the corrupted parties in the two worlds cannot be told apart

Output_i^Real : the output of Pi on input (x1,…,xn) when Pi is honest.
View_i^Real : as defined before, when Pi is corrupted.

The Real World: [ {View_i^Real}_{Pi in C} , {Output_i^Real}_{Pi in H} ]

Output_i^Ideal : the output of Pi on input (x1,…,xn) when Pi is honest.
View_i^Ideal : as defined before, when Pi is corrupted.

The Ideal World: [ {View_i^Ideal}_{Pi in C} , {Output_i^Ideal}_{Pi in H} ]

>> First note that this definition subsumes the previous one (it is stronger)

>> Captures randomized functions as well

Randomized Function and Definition 1

The Real World vs. The Ideal World

f(-, -) = (r, -), where r is a random bit: P1 receives r, P2 receives nothing.

Ideal world: the trusted party samples r randomly and outputs r to P1.
Real world: one party samples r randomly and sends it to the other; P1 outputs r.

>> Is this protocol secure? No!

Yet the proof under Definition 1 says the protocol is secure:

SIM (interacting on behalf of the honest party): sample and send a random r’

{View_i^Real}_{Pi in C} = { r : r is random }     {View_i^Ideal}_{Pi in C} = { r’ : r’ is random }

The two views are identically distributed, so Definition 1 is satisfied.

Randomized Function and Definition 2

The Real World vs. The Ideal World

f(-, -) = (r, -), where r is a random bit.

>> Is this protocol secure? No!

And this time the proof agrees: the protocol is insecure.

SIM (interacting on behalf of the honest party): sample and send a random r’

The Real World: [ {View_i^Real}_{Pi in C} , {Output_i^Real}_{Pi in H} ] = [ {r, r} | r random ]

The Ideal World: [ {View_i^Ideal}_{Pi in C} , {Output_i^Ideal}_{Pi in H} ] = [ {r’, r} | r, r’ random and independent ]

In the real world the corrupted party's view equals the honest party's output; in the ideal world the two are independent, so the joint distributions are easy to tell apart.
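The gap between the two definitions can be seen empirically. The following illustrative Python sketch (not from the lecture; each world is reduced to the two relevant random variables) shows that the view alone is a uniform bit in both worlds, while the joint distributions differ:

```python
# Illustrative sketch: Definition 1 (view only) vs. Definition 2 (joint with honest output)
# for the bad protocol computing f(-, -) = (r, -).
import random
from collections import Counter

TRIALS = 100_000

def real_world():
    r = random.randint(0, 1)          # the bit sampled inside the protocol
    adv_view, honest_output = r, r    # the corrupted party sees the same r that P1 outputs
    return adv_view, honest_output

def ideal_world():
    r = random.randint(0, 1)          # the trusted party's output to P1
    r_prime = random.randint(0, 1)    # the simulator's independently sampled message
    adv_view, honest_output = r_prime, r
    return adv_view, honest_output

views_real  = Counter(real_world()[0] for _ in range(TRIALS))
views_ideal = Counter(ideal_world()[0] for _ in range(TRIALS))
joint_real  = Counter(real_world() for _ in range(TRIALS))
joint_ideal = Counter(ideal_world() for _ in range(TRIALS))

print("views (Def 1):", views_real, views_ideal)   # both roughly uniform: indistinguishable
print("joint (Def 2):", joint_real, joint_ideal)   # real: only (0,0)/(1,1); ideal: all four pairs
```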

Definition 1 is Enough!

For deterministic functions, Definition 1 already implies Definition 2:

{ {View_i^Real}_{Pi in C} }_{x1,…,xn, k}   ≈   { {View_i^Ideal}_{Pi in C} }_{x1,…,xn, k}

implies

{ {View_i^Real}_{Pi in C} , {Output_i^Real}_{Pi in H} }_{x1,…,xn, k}   ≈   { {View_i^Ideal}_{Pi in C} , {Output_i^Ideal}_{Pi in H} }_{x1,…,xn, k}

>> For deterministic Functions:

> The view of the adversary and the outputs are NOT correlated.
> So we can consider the two distributions separately.
> The outputs of the honest parties are fixed by the inputs.

>> For randomized functions:

> We can view it as a deterministic function where the parties input randomness (apart from usual inputs).

Compute f((x1,r1), (x2,r2)) to compute g(x1,x2;r) where r1+r2 can act as r.
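An illustrative sketch of this reduction (the functionality g and the field size are assumed examples): each party also inputs a random field element, and r1 + r2 plays the role of r:

```python
# Illustrative sketch: a randomized functionality g(x1, x2; r) recast as the deterministic
# functionality f((x1, r1), (x2, r2)), where each party also inputs randomness and r = r1 + r2.
import random

P = 101  # a small prime field, chosen only for this example

def g(x1, x2, r):
    # Example randomized functionality: the sum of the inputs masked by a random field element r.
    return (x1 + x2 + r) % P

def f(inp1, inp2):
    # Deterministic functionality over augmented inputs (xi, ri); r1 + r2 plays the role of r.
    (x1, r1), (x2, r2) = inp1, inp2
    return (x1 + x2 + r1 + r2) % P

x1, x2 = 10, 32
r1, r2 = random.randrange(P), random.randrange(P)
assert f((x1, r1), (x2, r2)) == g(x1, x2, (r1 + r2) % P)
```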

Making Indistinguishability Precise

Notations:
o Security parameter k (a natural number)
o We wish security to hold for all inputs of all lengths, as long as k is large enough

Definition (negligible function): ε(·) is negligible if for every polynomial p(·) there exists an N such that for all k > N we have ε(k) < 1/p(k)

Definition (Probability ensemble X = {X(a,k)}):
o An infinite series, indexed by a string a and a natural number k
o Each X(a,k) is a random variable

In our context:
o X(x1,…,xn, k) = { {View_i^Real}_{Pi in C} }_{x1,…,xn, k}   (probability space: the randomness of the parties)
o Y(x1,…,xn, k) = { {View_i^Ideal}_{Pi in C} }_{x1,…,xn, k}   (probability space: the randomness of the corrupt parties and the simulator)

Computational Indistinguishability

o X(x1,…,xn, k) and Y(x1,…,xn, k) as defined above.

Definition (Computational indistinguishability, X = {X(a,k)} ≈_c Y = {Y(a,k)}):

For every polynomial-time distinguisher D there exists a negligible function ε such that for every a and all large enough k:

| Pr[ D(X(a,k)) = 1 ] - Pr[ D(Y(a,k)) = 1 ] | < ε(k)

For our case, a = (x1,…,xn)

Alternative definition: Adv_D(X,Y) = the probability of D guessing the correct distribution; Adv_D(X,Y) < 1/2 + ε(k)

(The distinguisher D plays the role of the real adversary A.)
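A short calculation (under the standard convention, assumed here, that the challenge is drawn from X(a,k) or Y(a,k) with probability 1/2 each and D outputs 1 to mean "X") connects the two formulations:

```latex
\Pr[\text{D guesses correctly}]
  = \tfrac{1}{2}\Pr[D(X(a,k))=1] + \tfrac{1}{2}\bigl(1-\Pr[D(Y(a,k))=1]\bigr)
  = \tfrac{1}{2} + \tfrac{1}{2}\bigl(\Pr[D(X(a,k))=1]-\Pr[D(Y(a,k))=1]\bigr)
```

So a distinguishing gap of at most ε(k) bounds the guessing probability by 1/2 + ε(k)/2, and a guessing probability of 1/2 + ε(k) corresponds to a gap of 2ε(k).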

Statistical Indistinguishability

o X(x1,…,xn, k) and Y(x1,…,xn, k) as defined above.

Definition (Statistical indistinguishability, X = {X(a,k)} ≈_s Y = {Y(a,k)}):

For every distinguisher D (which may have unbounded power) there exists a negligible function ε such that for every a and all large enough k:

| Pr[ D(X(a,k)) = 1 ] - Pr[ D(Y(a,k)) = 1 ] | < ε(k)

For our case, a = (x1,…,xn)

Alternative definition: Adv_D(X,Y) = the probability of D guessing the correct distribution; Adv_D(X,Y) < 1/2 + ε(k)

Perfect Indistinguishability

o X(x1,…,xn, k) and Y(x1,…,xn, k) as defined above.

Definition (Perfect indistinguishability, X = {X(a,k)} ≈_P Y = {Y(a,k)}):

For every distinguisher D (which may have unbounded power), for every a and for all k:

| Pr[ D(X(a,k)) = 1 ] - Pr[ D(Y(a,k)) = 1 ] | = 0

For our case, a = (x1,…,xn)

Alternative definition: Adv_D(X,Y) = the probability of D guessing the correct distribution; Adv_D(X,Y) = 1/2 exactly

Definition Applies for:

> Dimension 2 (Networks): Complete, Synchronous
> Dimension 3 (Distrust): Centralized
> Dimension 4 (Adversary): Threshold/non-threshold, Polynomially bounded and unbounded powerful, Semi-honest, Static

What is so great about the definition paradigm?

>> One definition for all

> Sum: (x1 + x2 + … + xn) = f(x1, x2, … , xn)

> OT: (- , xb) = f((x1, x2 ), b)

> BA: (y, y, …, y) = f(x1, x2, …, xn), where y = majority(x1, x2, …, xn) or a default value

>> Easy to tweak the ideal world and weaken/strengthen security

> The real-world protocol achieves whatever the ideal world achieves

>> Coming up with the right ideal world is tricky and requires skill

> Will have fun with it in malicious world!
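Viewed this way, each task is just a different ideal functionality f. A small illustrative sketch (the function names and the tie-breaking convention for BA are assumptions):

```python
# "One definition for all": each task is simply a different ideal functionality f.
from collections import Counter

def f_sum(xs):
    # Sum: every party learns x1 + x2 + ... + xn.
    return [sum(xs)] * len(xs)

def f_ot(sender_input, b):
    # OT: the sender inputs (x1, x2), the receiver inputs a choice b in {1, 2};
    # the sender learns nothing, the receiver learns x_b.
    x1, x2 = sender_input
    return (None, x1 if b == 1 else x2)

def f_ba(xs, default=0):
    # BA: every party outputs y = majority(x1, ..., xn), or a default value if there is no majority.
    (val, cnt), = Counter(xs).most_common(1)
    y = val if cnt > len(xs) // 2 else default
    return [y] * len(xs)

print(f_sum([2, 1, 5, 9]))     # [17, 17, 17, 17]
print(f_ot((7, 42), b=2))      # (None, 42)
print(f_ba([0, 1, 1, 1, 0]))   # [1, 1, 1, 1, 1]
```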

Information Theoretic MPC with Semi-honest Adversary and honest majority [BGW88]

Dimension 1 (Models of Computation): Arithmetic

Dimension 2 (Networks): Complete, Synchronous

Dimension 3 (Distrust): Centralized

Dimension 4 (Adversary): Threshold (t), Unbounded powerful, Semi-honest, Static

Michael Ben-Or, Shafi Goldwasser, Avi Wigderson: Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation (Extended Abstract). STOC 1988.

(n, t) - Secret Sharing Scheme [Shamir 1979, Blakley 1979]

Sharing Phase: a dealer holding a secret s distributes shares v1, v2, v3, …, vn, one to each party.

> Fewer than t+1 parties have no information about the secret.

Reconstruction Phase: any t+1 parties can reconstruct the secret s.

Shamir-sharing: (n,t) - Secret Sharing for Semi-honest Adversaries

Secret x is Shamir-shared if there is a random polynomial f(·) of degree at most t over Fp (with p > n) such that f(0) = x, and each party Pi holds the share xi = f(i).
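A minimal Python sketch of the sharing phase, assuming a fixed prime p > n (the field size and helper names are illustrative, not from the lecture):

```python
# Illustrative Shamir sharing for semi-honest parties over a prime field F_p with p > n.
# Share x by a random polynomial f of degree <= t with f(0) = x; party Pi receives f(i).
import random

P = 2**31 - 1  # a prime larger than n; any such prime works for this sketch

def shamir_share(x, n, t):
    coeffs = [x % P] + [random.randrange(P) for _ in range(t)]   # f(X) = x + c1*X + ... + ct*X^t
    def f(i):
        return sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
    return [f(i) for i in range(1, n + 1)]                        # the share of Pi is f(i)

shares = shamir_share(42, n=5, t=2)
print(shares)
```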

Reconstruction of Shamir-sharing: (n,t) - Secret Sharing for Semi-honest Adversaries

Every party Pj sends its share xj to Pi, and Pi reconstructs x from any t+1 shares by Lagrange's interpolation. The same is done for every Pi that should learn the secret.
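A matching sketch of reconstruction via Lagrange interpolation at 0 (again illustrative; any t+1 correct shares suffice):

```python
# Illustrative reconstruction: interpolate the sharing polynomial at 0 from t+1 points (i, f(i)).
P = 2**31 - 1

def reconstruct(points):
    # points: a list of (i, f(i)) pairs with distinct i, at least t+1 of them.
    secret = 0
    for i, y_i in points:
        num, den = 1, 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % P           # factor (0 - j)
                den = den * (i - j) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P   # delta_i(0) = num / den
    return secret

# Example: f(X) = 42 + 7X + 3X^2 has degree 2, so the secret is f(0) = 42.
pts = [(i, (42 + 7 * i + 3 * i * i) % P) for i in (1, 3, 5)]
print(reconstruct(pts))  # 42
```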

Shamir-sharing Properties

Property 2: Any t parties have ‘no’ information about the secret: Pr[secret = s before secret sharing] - Pr[secret = s after secret sharing] = 0.

Property 1: Any (t+1) parties have ‘complete’ information about the secret.

>> Both proofs can be derived from Lagrange’s Interpolation.

Lagrange’s Interpolation

>> Assume that h(x) is a polynomial of degree at most t and C is a subset of Fp of size t+1.
>> Assume for simplicity C = {1,…,t+1}.

Theorem: h(x) can be written as a linear combination of the values h(i), i in C, using basis polynomials δ_i(x), where each δ_i(x):
> is a polynomial of degree t
> evaluates to 1 at i
> evaluates to 0 at any other point of C.
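For reference, the standard Lagrange form of the theorem, matching the properties of δ_i(x) listed above, is:

```latex
h(x) \;=\; \sum_{i \in C} \delta_i(x)\, h(i),
\qquad
\delta_i(x) \;=\; \prod_{\substack{j \in C \\ j \neq i}} \frac{x - j}{\,i - j\,}
```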

Proof:

Consider LHS - RHS:

Both the LHS and the RHS evaluate to h(i) for every i in C.

Both the LHS and the RHS have degree at most t.

So LHS - RHS evaluates to 0 for every i in C and has degree at most t.

More zeros (roots) than the degree: it is the zero polynomial!

LHS = RHS

Lagrange’s Interpolation

Theorem: h(x) = Σ_{i in C} δ_i(x) · h(i), with C = {1,…,t+1}, where each δ_i(x) is a polynomial of degree t that evaluates to 1 at i and to 0 at any other point of C.

>> The δ_i(x) are public polynomials.
>> Their evaluations at 0 are public values, denoted ri = δ_i(0).
>> So h(0) can be written as a linear combination of the h(i)s: h(0) = Σ_{i in C} ri · h(i).

The combiners r1,…,r_{t+1} form the recombination vector.

Property 1: Any (t+1) parties have ‘complete’ information about the secret.
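A small sketch (illustrative parameters) showing that the recombination vector is a public quantity: it depends only on the evaluation points C, and the same r_i recombine the shares of any secret:

```python
# Illustrative check: r_i = delta_i(0) depends only on C, not on the shared polynomial.
P = 97
C = [1, 2, 3]   # t + 1 = 3 evaluation points

def recombination_vector(C):
    r = []
    for i in C:
        num, den = 1, 1
        for j in C:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        r.append(num * pow(den, -1, P) % P)    # r_i = delta_i(0)
    return r

r = recombination_vector(C)

def shares_of_poly(coeffs):                    # f(X) = coeffs[0] + coeffs[1]*X + ...
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in C]

for coeffs in ([10, 4, 7], [55, 90, 1]):       # two different secrets/polynomials
    shares = shares_of_poly(coeffs)
    assert sum(ri * si for ri, si in zip(r, shares)) % P == coeffs[0]
print("recombination vector:", r)
```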

Lagrange’s Interpolation

Property 2: Any t parties have ‘no’ information about the secret: Pr[secret = s before secret sharing] - Pr[secret = s after secret sharing] = 0.

Proof: For any secret s from Fp, sample f(x) of degree at most t uniformly at random subject to f(0) = s, and consider, for any subset C of Fp \ {0} of size t, the distribution of ( {f(i)}_{i in C} ).

For a fixed s, the t non-constant coefficients of f range over Fp^t, and the map

fs : Fp^t → Fp^t,   (coefficients) ↦ ( {f(i)}_{i in C} )

is bijective: otherwise two different polynomials of degree at most t, both with constant term s, would agree on the t points of C and at 0, i.e. on t+1 points, and would therefore be the same polynomial.

Since the coefficients are uniformly distributed, the bijection makes ( {f(i)}_{i in C} ) uniformly distributed in Fp^t.

So for every s, the shares seen by any t parties are uniformly distributed, independently of the distribution of s, which proves Property 2.
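Property 2 can also be checked exhaustively over a tiny field. The following illustrative sketch (assumed parameters p = 5, t = 2, corrupted set C = {1, 2}) confirms that the shares of any t parties are uniformly distributed whatever the secret is:

```python
# Exhaustive check of Property 2 over F_5 with t = 2: for every secret s, the pair of shares
# seen by the t corrupted parties is uniform over F_p^t, hence independent of s.
from collections import Counter
from itertools import product

P, T = 5, 2
C = (1, 2)

for s in range(P):
    dist = Counter()
    # Enumerate all polynomials f(X) = s + c1*X + c2*X^2 with f(0) = s.
    for c1, c2 in product(range(P), repeat=T):
        shares = tuple((s + c1 * i + c2 * i * i) % P for i in C)
        dist[shares] += 1
    # Every one of the p^t share vectors occurs exactly once: the distribution is uniform.
    assert len(dist) == P ** T and set(dist.values()) == {1}

print("the shares of any t parties are uniform, for every secret s")
```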

(n,t) Secret Sharing

[s] : an (n,t) secret sharing of the secret s

For MPC: a Linear (n,t) Secret Sharing scheme.

Linearity: the parties can do the following locally:
> from [s1] and [s2], compute [s1 + s2]
> from [s] and a public constant c, compute [c · s]

Linearity of (n, t) Shamir Secret Sharing

Suppose a is shared with shares a1, a2, …, an (held by P1, P2, …, Pn) and b is shared with shares b1, b2, …, bn. Each party locally computes ci = ai + bi. The resulting shares c1, c2, …, cn form an (n,t)-sharing of a + b.

Addition is absolutely free: no interaction is needed.

Linearity of (n, t) Shamir Secret Sharing

Suppose a is shared with shares a1, a2, …, an and c is a publicly known constant. Each party locally computes di = c · ai. The resulting shares d1, d2, …, dn form an (n,t)-sharing of c · a.

Multiplication by public constants is absolutely free: no interaction is needed.
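A minimal sketch of both linear operations over Shamir shares (illustrative helpers, same field conventions as the earlier sketches):

```python
# Illustrative linear operations on Shamir shares: parties add shares, or scale them by a
# public constant, locally; the results are (n, t)-sharings of a + b and c * a respectively.
import random

P, N, T = 2**31 - 1, 5, 2

def share(x):
    coeffs = [x % P] + [random.randrange(P) for _ in range(T)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in range(1, N + 1)]

def reconstruct(points):
    s = 0
    for i, y in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num, den = num * (-j) % P, den * (i - j) % P
        s = (s + y * num * pow(den, -1, P)) % P
    return s

a_sh, b_sh = share(20), share(22)
c = 7                                                     # a publicly known constant
sum_sh = [(ai + bi) % P for ai, bi in zip(a_sh, b_sh)]    # each party adds locally
mul_sh = [(c * ai) % P for ai in a_sh]                    # each party scales locally
print(reconstruct(list(enumerate(sum_sh, start=1))[:T + 1]))  # 42 = 20 + 22
print(reconstruct(list(enumerate(mul_sh, start=1))[:T + 1]))  # 140 = 7 * 20
```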

Non-linearity of (n, t) Shamir Secret Sharing

Suppose a and b are shared with shares a1, …, an and b1, …, bn. If each party locally computes di = ai · bi, the values d1, …, dn determine a · b, but they lie on the product polynomial, which has degree 2t rather than t (and is not uniformly distributed), so they do not form a proper (n,t)-sharing of a · b.

Multiplication of shared secrets is not free: it requires interaction.
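A minimal sketch (illustrative, with n = 2t + 1 parties) of the degree problem: the local products interpolate to a·b only when 2t + 1 of them are used:

```python
# Illustrative check: local products d_i = a_i * b_i lie on the product polynomial of degree 2t,
# so t + 1 of them no longer determine the product, while 2t + 1 of them do.
import random

P, N, T = 2**31 - 1, 7, 2   # note n >= 2t + 1 parties

def share(x):
    coeffs = [x % P] + [random.randrange(P) for _ in range(T)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in range(1, N + 1)]

def interpolate_at_zero(points):
    s = 0
    for i, y in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num, den = num * (-j) % P, den * (i - j) % P
        s = (s + y * num * pow(den, -1, P)) % P
    return s

a_sh, b_sh = share(6), share(7)
d_sh = [(ai * bi) % P for ai, bi in zip(a_sh, b_sh)]   # local products: a degree-2t "sharing"
pts = list(enumerate(d_sh, start=1))
print(interpolate_at_zero(pts[:2 * T + 1]))   # 42: 2t+1 points recover a*b
print(interpolate_at_zero(pts[:T + 1]))       # generally NOT 42: t+1 points are not enough
```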

Secure Circuit Evaluation

A circuit over inputs x1, x2, x3, x4, with a public constant c and output y. Running example: inputs 2, 1, 5, 9.

1. (n, t)-secret share each input.

2. Find an (n, t)-sharing of each intermediate value.
(In the running example, the wire values shown are 3, 48, 45 and 144.)

> Linear gates: handled by the linearity of Shamir sharing - non-interactive.
> Non-linear (multiplication) gates: require a degree-reduction technique - interactive.
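A minimal end-to-end sketch of steps 1 and 2 for the linear part of a circuit. The circuit y = c·(x1 + x2) + x3 is a hypothetical example, and multiplication between two shared values (which needs the interactive degree-reduction step) is omitted:

```python
# Illustrative secure evaluation of the linear part of a circuit over Shamir shares.
import random

P, N, T = 2**31 - 1, 5, 2

def share(x):
    coeffs = [x % P] + [random.randrange(P) for _ in range(T)]
    return [sum(cf * pow(i, k, P) for k, cf in enumerate(coeffs)) % P for i in range(1, N + 1)]

def reconstruct(points):
    s = 0
    for i, y in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num, den = num * (-j) % P, den * (i - j) % P
        s = (s + y * num * pow(den, -1, P)) % P
    return s

x1, x2, x3, c = 2, 1, 5, 3
sh1, sh2, sh3 = share(x1), share(x2), share(x3)           # step 1: share each input

w1 = [(a + b) % P for a, b in zip(sh1, sh2)]              # step 2: addition gate, local
w2 = [(c * a) % P for a in w1]                            #         constant-multiplication gate, local
y_sh = [(a + b) % P for a, b in zip(w2, sh3)]             #         addition gate, local

print(reconstruct(list(enumerate(y_sh, start=1))[:T + 1]))   # 14 = 3*(2 + 1) + 5
```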

Privacy follows (intuitively) because:

1. No inputs of the honest parties are leaked.

2. No intermediate value is leaked.