02/17/10  CSCE 769
Optimization
Homayoun Valafar, Department of Computer Science and Engineering, USC
Optimization and Protein Folding
• Theoretically, the structure with the minimum total energy is the structure of interest
• Total energy is defined by the force field
• The core of ab initio protein folding is optimization
• A robust optimization method is crucial for successful protein-folding algorithms
E_Total = w_BOND·E_BOND + w_ANGL·E_ANGL + w_DIHE·E_DIHE + w_IMPR·E_IMPR + w_VDW·E_VDW + w_ELEC·E_ELEC
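As a small sketch of this weighted sum (the term values and uniform weights here are invented for illustration; a real force field computes each term from the molecular geometry):

```python
# Illustrative sketch of the weighted force-field total energy.
# Term names follow the slide; the numeric values are made up.
def total_energy(terms, weights):
    """E_Total = sum over terms of w_term * E_term."""
    return sum(weights[name] * e for name, e in terms.items())

terms = {"BOND": 1.2, "ANGL": 0.8, "DIHE": 2.5,
         "IMPR": 0.1, "VDW": -3.0, "ELEC": -1.5}
weights = {name: 1.0 for name in terms}  # uniform weights for illustration

print(total_energy(terms, weights))
```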
Optimization Problem
• Given an objective (cost) function f(X), find the optimal point X* such that:

  f(X): Rⁿ → R,   X, X* ∈ Rⁿ
  f(X*) ≤ f(X)  ∀ X

• Optimization is at the root of most computational problems
• Many different approaches exist, each with its own set of advantages and disadvantages
Monte Carlo
• Easiest to implement
• Very effective for low-dimensional problems
• Very ineffective for high-dimensional problems
• The algorithm randomly samples the space and accepts the point X* with the smallest value of the objective function

for i = 1 to 10000 {
    X = random(range)
    if f(X) <= f(X*) then X* = X
}
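A minimal runnable version of this random-search loop (the 1-D objective and sampling range are illustrative choices):

```python
import random

def monte_carlo_minimize(f, low, high, n_samples=10000, seed=0):
    """Randomly sample the interval and keep the point with the smallest f."""
    rng = random.Random(seed)
    x_best = rng.uniform(low, high)
    for _ in range(n_samples - 1):
        x = rng.uniform(low, high)
        if f(x) <= f(x_best):
            x_best = x
    return x_best

# Example: minimize f(x) = (x - 2)^2 over [-10, 10]; with 10,000 samples
# the best point should land very close to the true minimum at x = 2.
x_star = monte_carlo_minimize(lambda x: (x - 2) ** 2, -10.0, 10.0)
print(x_star)
```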
Gradient Descent (Conjugate Gradient, Hill Climbing)
• Start with some initial point X₀
• Calculate X_{k+1} from X_k in the following way:

  x_{k+1} = x_k ± ρ·∇f(x_k)

  Here ρ is the step-size parameter; it needs to be selected carefully, since values that are too small or too large can have severe consequences.
• Calculate f(X_{k+1})
• Go to step 2.
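The update rule can be sketched in a few lines (the minus sign gives descent; a plus would give ascent; the quadratic test function and step size are illustrative):

```python
def gradient_descent(grad_f, x0, rho=0.1, n_steps=100):
    """Iterate x_{k+1} = x_k - rho * grad_f(x_k)."""
    x = x0
    for _ in range(n_steps):
        x = x - rho * grad_f(x)
    return x

# Example: f(x) = (x - 3)^2 has gradient 2*(x - 3) and minimum at x = 3.
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(x_min)
```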
Gradient of a Function
• A multi-dimensional derivative
• For an n-dimensional function, the gradient produces an n-dimensional vector pointing in the direction of steepest increase
• Following the direction of the gradient will increase f(x) optimally
• Following the opposite direction of the gradient will decrease f(x) optimally
• Example:

  f(x, y, z) = x²y + xyz + y³ − 3xy²z
  ∇f(x, y, z) = (2xy + yz − 3y²z,  x² + xz + 3y² − 6xyz,  xy − 3xy²)
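Assuming the slide's example function is f(x, y, z) = x²y + xyz + y³ − 3xy²z (the formula is partly garbled in the source), the analytic gradient can be checked against central finite differences:

```python
# Numerical check of the example gradient (f as assumed above).
def f(x, y, z):
    return x**2 * y + x * y * z + y**3 - 3 * x * y**2 * z

def grad_f(x, y, z):
    return (2*x*y + y*z - 3*y**2*z,
            x**2 + x*z + 3*y**2 - 6*x*y*z,
            x*y - 3*x*y**2)

def numeric_grad(func, p, h=1e-5):
    """Central finite-difference approximation of the gradient."""
    out = []
    for i in range(len(p)):
        hi = list(p); hi[i] += h
        lo = list(p); lo[i] -= h
        out.append((func(*hi) - func(*lo)) / (2 * h))
    return tuple(out)

p = (1.0, 2.0, -1.0)
print(grad_f(*p))
print(numeric_grad(f, p))
```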
Example of Gradient Ascent
f(x) = -x² + 5;  x_max = 0, f_max = 5

x_k      f'(x_k) = -2x_k    x_{k+1} = x_k + 0.4·f'(x_k)
1.5      -3                 0.3
0.3      -0.6               0.06
0.06     -0.12              0.012
0.012    -0.024             0.0024
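These iterates can be reproduced directly, assuming the intended objective is f(x) = -x² + 5 (consistent with f_max = 5 and f'(x) = -2x):

```python
# Reproduce the gradient-ascent iterates for f(x) = -x^2 + 5 with rho = 0.4.
def ascent_iterates(x0, rho=0.4, n_steps=4):
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x + rho * (-2.0 * x))  # x_{k+1} = x_k + rho * f'(x_k)
    return xs

print(ascent_iterates(1.5))  # approaches the maximum at x = 0
```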
[Figure: plot of f(x) = -x² + 5 for x from -5 to 4, with the maximum f = 5 at x = 0]
Example of an Objective (Cost) Function
[Figure: example objective function plotted over x from 1 to 21; global optimum at X* = (17.04, -1.91)]
Gradient Descent
[Figure: gradient descent on the example objective function; the direction of movement indicated by G.D. leads from the starting point to the final point]
Simulated Annealing: Metropolis Algorithm
1. Start at X_i.
2. Generate a candidate X_j.
3. If f(X_j) < f(X_i), accept the move: X_i = X_j.
4. Otherwise, accept X_j with probability

   Pr = e^((f(X_i) - f(X_j)) / (kT))

5. Adjust T and repeat until the stopping criterion is met.
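The acceptance rule can be sketched as follows (the Boltzmann constant is folded into T here, and the temperature value is illustrative):

```python
import math
import random

def metropolis_accept(f_i, f_j, T, rng):
    """Always accept an improving move; otherwise accept with
    probability exp((f(X_i) - f(X_j)) / T)."""
    if f_j < f_i:
        return True
    return rng.random() < math.exp((f_i - f_j) / T)

# An uphill move of +1 at T = 1 should be accepted with
# probability exp(-1), roughly 0.37.
rng = random.Random(0)
rate = sum(metropolis_accept(0.0, 1.0, 1.0, rng) for _ in range(10000)) / 10000
print(rate)
```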
Pseudo PASCAL code

begin
  Initialize(istart, T0, L0);
  k := 0; i := istart;
  repeat
    for l := 1 to Lk do
    begin
      Generate(j from i);
      if f(j) <= f(i) then i := j
      else if exp((f(i) - f(j)) / Tk) > random[0, 1) then i := j
    end;
    k := k + 1;
    Calculate_Length(Lk);
    Calculate_Control(Tk)
  until stop criterion
end;
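A Python rendering of this pseudocode (the geometric cooling schedule, chain length, and test objective are assumed choices, not taken from the slide):

```python
import math
import random

def simulated_annealing(f, neighbor, x0, T0=1.0, L=100,
                        alpha=0.9, T_min=1e-3, seed=0):
    """At each temperature run L trial moves; accept worse moves with the
    Metropolis probability; then cool geometrically (an assumed schedule)."""
    rng = random.Random(seed)
    x, T = x0, T0
    while T > T_min:
        for _ in range(L):
            y = neighbor(x, rng)
            if f(y) < f(x) or rng.random() < math.exp((f(x) - f(y)) / T):
                x = y
        T *= alpha
    return x

# Example: minimize f(x) = x^2 starting far from the optimum,
# proposing Gaussian steps around the current point.
x_opt = simulated_annealing(lambda x: x * x,
                            lambda x, rng: x + rng.gauss(0.0, 0.5),
                            x0=5.0)
print(x_opt)
```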
Contribution of Simulated Annealing
Simulated annealing helps the search escape from local minima.
[Figure: objective function with the starting point, a transient (local) optimal point, and the global optimal point; the direction of movement indicated by G.D. stalls at the transient optimum, while movement assisted by S.A. reaches the global optimum]
Limited Success with GD