
Convex Optimization

Prof. Nati Srebro

Lecture 18: Course Summary


About the Course

• Methods for solving convex optimization problems, based on oracle access, with guarantees based on their properties

• Understanding different optimization methods
  • Understanding their derivation
  • When are they appropriate
  • Guarantees (a few proofs, not a core component)

• Working and reasoning about opt problems
  • Standard forms: LP, QP, SDP, etc.
  • Optimality conditions
  • Duality
  • Using Optimality Conditions and Duality to reason about x*


Oracles

• 0th, 1st, 2nd, etc. order oracle for a function f:  x ↦ f(x), ∇f(x), ∇²f(x), …

• Separation oracle for a constraint set 𝒳:  x ↦ either "x ∈ 𝒳", or g s.t. 𝒳 ⊂ { x′ : ⟨g, x − x′⟩ < 0 }

• Projection oracle for a constraint set 𝒳 (w.r.t. ‖·‖₂):  y ↦ argmin_{x ∈ 𝒳} ‖x − y‖₂²  (see the sketch after this list)

• Linear optimization oracle:  v ↦ argmin_{x ∈ 𝒳} ⟨v, x⟩

• Prox/projection oracle for a function h (w.r.t. ‖·‖₂):  (y, λ) ↦ argmin_x h(x) + λ‖x − y‖₂²

• Stochastic oracle:  x ↦ g s.t. 𝔼[g] = ∇f(x)
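
As concrete (hypothetical) instances of the projection and prox oracles above, here is a minimal Python sketch, assuming the constraint set is the Euclidean ball and the function is the ℓ₁ norm; neither example is from the lecture:

import numpy as np

def project_l2_ball(y, radius=1.0):
    # Projection oracle for X = {x : ||x||_2 <= radius}:
    # argmin_{x in X} ||x - y||_2^2 rescales y onto the ball when it lies outside.
    norm = np.linalg.norm(y)
    return y if norm <= radius else (radius / norm) * y

def prox_l1(y, lam):
    # Prox oracle for h(x) = ||x||_1 with parameter lam:
    # argmin_x ||x||_1 + lam * ||x - y||_2^2 is soft-thresholding at 1/(2*lam).
    return np.sign(y) * np.maximum(np.abs(y) - 1.0 / (2.0 * lam), 0.0)

Both calls cost O(n), which is what keeps the per-iteration cost of the first-order methods below small.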


Analysis Assumptions

• f is L-Lipschitz (w.r.t. ‖·‖):  |f(x) − f(y)| ≤ L ‖x − y‖

• f is M-smooth (w.r.t. ‖·‖):  f(x + Δx) ≤ f(x) + ⟨∇f(x), Δx⟩ + (M/2) ‖Δx‖²

• f is μ-strongly convex (w.r.t. ‖·‖):  f(x) + ⟨∇f(x), Δx⟩ + (μ/2) ‖Δx‖² ≤ f(x + Δx)

• f is self-concordant:  for all x and all directions v, |f_v'''(x)| ≤ 2 f_v''(x)^{3/2}

• f is quadratic (i.e. f''' = 0)

• ‖x*‖ is small (or the domain is bounded)

• Initial sub-optimality ε₀ = f(x⁽⁰⁾) − p* is small
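
For a concrete (hypothetical) case, take least squares, f(x) = ½‖Ax − b‖₂²: the Hessian is AᵀA everywhere, so the smoothness and strong-convexity constants w.r.t. ‖·‖₂ are its extreme eigenvalues. A minimal sketch:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))

# For f(x) = 0.5 * ||A x - b||_2^2 the Hessian is A^T A, so
# M = lambda_max(A^T A) (smoothness) and mu = lambda_min(A^T A) (strong convexity).
eigvals = np.linalg.eigvalsh(A.T @ A)
mu, M = eigvals[0], eigvals[-1]
kappa = M / mu   # condition number that drives the kappa-dependent iteration counts below
print(mu, M, kappa)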


Overall runtime

( runtime of each iteration ) × ( number of required iterations )

where runtime of each iteration = ( oracle runtime ) + ( other operations )


Contrast to Non-Oracle Methods

• Symbolically solving ∇f(x) = 0

• Gaussian elimination

• Simplex method for LPs


Advantages of Oracle Approach

• Handles generic functions (not only functions of a specific parametric form)

• Makes it clear which functions can be handled and what we need to be able to compute about them

• Complexity measured in terms of "oracle accesses"

• Can obtain lower bounds on the number of oracle accesses required


Unconstrained Optimization with 1st Order Oracle:
Dimension-Independent Guarantees

𝝁 β‰Ό π›πŸ β‰Ό 𝑴 π›πŸ β‰Ό 𝑴 𝛁 ≀ 𝑳𝛁 ≀ 𝑳𝝁 β‰Ό π›πŸ

GD𝑀

πœ‡log 1/πœ–

𝑀 π‘₯βˆ— 2

πœ–

𝐿2 π‘₯βˆ— 2

πœ–2𝐿2

πœ‡πœ–

A-GD π‘€πœ‡ log 1/πœ–

𝑀 π‘₯βˆ— 2

πœ–

Lower bound Ξ© π‘€πœ‡ log 1/πœ– Ξ©

𝑀 π‘₯βˆ— 2

πœ–Ξ©

𝐿2 π‘₯βˆ— 2

πœ–2Ξ©

𝐿2

πœ‡πœ–

Theorem: for any method that uses only 1st order oracle access, there exists an L-Lipschitz function with ‖x*‖ ≤ 1 such that at least Ω(L²/ε²) oracle accesses are required for the method to find an ε-suboptimal solution.

Yuri Nesterov
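
To see the gap between the GD and A-GD rows above, here is a minimal sketch (not from the lecture) comparing gradient descent with Nesterov's accelerated gradient descent on a strongly convex quadratic; the step size 1/M and the momentum coefficient (√κ − 1)/(√κ + 1) are the standard textbook choices:

import numpy as np

rng = np.random.default_rng(0)
n = 100
eigs = np.linspace(1.0, 100.0, n)            # mu = 1, M = 100, so kappa = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = Q @ np.diag(eigs) @ Q.T                  # f(x) = 0.5 x^T H x, minimized at x* = 0
mu, M = eigs[0], eigs[-1]
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x

x_gd = x_agd = y = rng.standard_normal(n)
beta = (np.sqrt(M / mu) - 1) / (np.sqrt(M / mu) + 1)   # A-GD momentum coefficient
for t in range(200):
    # Plain GD with step 1/M: suboptimality shrinks roughly like (1 - 1/kappa)^t
    x_gd = x_gd - (1.0 / M) * grad(x_gd)
    # Accelerated GD: suboptimality shrinks roughly like (1 - 1/sqrt(kappa))^t
    x_next = y - (1.0 / M) * grad(y)
    y = x_next + beta * (x_next - x_agd)
    x_agd = x_next
print("GD  :", f(x_gd))
print("A-GD:", f(x_agd))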


Unconstrained Optimization with 1st Order Oracle:
Low-Dimensional Guarantees

#π’Šπ’•π’†π’“Runtime per

iterOverallruntime

Center of Mass 𝑛 log

1

πœ–βˆ— βˆ—

Ellipsoid 𝑛2 log1

πœ–π‘›2 𝑛4 log

1

πœ–

Lower Bound Ξ© 𝑛 log1

πœ–


Overall runtime

( runtime of each iteration ) Γ— (number of required iterations)


Unconstrained Optimization: Practical Methods

             Memory      Per iter           #iter (μ ≼ ∇² ≼ M)                 #iter (quadratic f)
GD           O(n)        ∇f + O(n)          κ log(1/ε)                         κ log(1/ε)
A-GD         O(n)        ∇f + O(n)          √κ log(1/ε)                        √κ log(1/ε)
Momentum     O(n)        ∇f + O(n)
Conj-GD      O(n)        ∇f + O(n)                                             √κ log(1/ε)
L-BFGS       O(rn)       ∇f + O(rn)
BFGS         O(n²)       ∇f + O(n²)                                            √κ log(1/ε)
Newton       O(n²)       ∇²f + O(n³)        (f(x⁽⁰⁾) − p*)/γ + log log(1/ε)    1

(κ = M/μ)


Constrained Optimization:  min_{x ∈ 𝒳} f(x)

• Projected Gradient Descent (sketched after this list)
  • Oracle: y ↦ argmin_{x ∈ 𝒳} ‖x − y‖₂²
  • O(n) + one projection per iteration
  • Similar iteration complexity to Gradient Descent: poly dependence on 1/ε or on κ

• Conditional Gradient Descent
  • Oracle: v ↦ argmin_{x ∈ 𝒳} ⟨v, x⟩
  • O(n) + one linear optimization per iteration
  • O(M/ε) iterations, but no acceleration and no O(κ log(1/ε))

• Ellipsoid Method
  • Separation oracle (can handle much more generic 𝒳)
  • O(n²) + one separation oracle call per iteration
  • O(n² log(1/ε)) iterations; total runtime O(n⁴ log(1/ε))
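
A minimal sketch of projected gradient descent (not from the lecture), assuming the projection oracle for the unit Euclidean ball:

import numpy as np

def projected_gradient_descent(grad, project, x0, step, iters=500):
    # x_{t+1} = Proj_X( x_t - step * grad(x_t) )
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Example: minimize 0.5 * ||x - c||^2 over the unit Euclidean ball.
rng = np.random.default_rng(0)
c = 3.0 * rng.standard_normal(10)
grad = lambda x: x - c
project = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)

x_hat = projected_gradient_descent(grad, project, np.zeros(10), step=0.5)
print(np.linalg.norm(x_hat))   # ends up on the boundary, since ||c|| > 1 here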


Constrained Optimization

• Interior Point Methods
  • Only 1st and 2nd order oracle access to f₀, fᵢ
  • Overall #Newton iterations: O(√m (log(1/ε) + log log(1/δ)))
  • Overall runtime: ≈ O(√m ((n + p)³ + m ∇²-evaluations) log(1/ε))
  • Can also handle matrix inequalities (SDPs)
  • Other cones: need a self-concordant barrier
  • The "standard method" for LPs, QPs, SDPs, etc.

Problem forms handled:

  min_{x ∈ ℝⁿ} f₀(x)   s.t.  fᵢ(x) ≤ 0,   Ax = b

  min_{x ∈ ℝⁿ} f₀(x)   s.t.  fᵢ(x) ≼ 0,   Ax = b        (matrix inequalities)

  min_{x ∈ ℝⁿ} f₀(x)   s.t.  −fᵢ(x) ∈ Kᵢ,  Ax = b        (generalized inequalities over cones Kᵢ)


Barrier Methods

• Log barrier Î_t(u) = −(1/t) log(−u), approximating the ideal penalty that is 0 for u ≤ 0 and ∞ for u > 0

• Analysis and guarantees

• "Central path"

• Relaxation of the KKT conditions

Replacing the constraints fᵢ(x) ≤ 0, Ax = b of the original problem, solve

  min_{x ∈ ℝⁿ}  f₀(x) + Σ_{i=1}^m Î_t(fᵢ(x))
  s.t.  Ax = b

[Figure: the log-barrier function vs. the ideal penalty function, and the constrained minimizer x*.]


Optimality Conditions (assuming Strong Duality)

x* optimal for (P) and (λ*, ν*) optimal for (D)  ⟺  the KKT conditions hold:

  fᵢ(x*) ≤ 0                 ∀ i = 1…m    (primal feasibility)
  hⱼ(x*) = 0                 ∀ j = 1…p
  λᵢ* ≥ 0                    ∀ i = 1…m    (dual feasibility)
  ∇ₓ L(x*, λ*, ν*) = 0                    (stationarity)
  λᵢ* fᵢ(x*) = 0             ∀ i = 1…m    (complementary slackness)

William Karush, Harold Kuhn, Albert Tucker
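
A tiny worked instance (not from the slides): minimize x² subject to 1 − x ≤ 0.

\[
L(x, \lambda) = x^2 + \lambda (1 - x), \qquad \lambda \ge 0 .
\]
\[
\text{Stationarity: } \nabla_x L = 2x - \lambda = 0 \;\Rightarrow\; \lambda = 2x .
\qquad
\text{Complementary slackness: } \lambda (1 - x) = 0 .
\]
If \(\lambda = 0\) then \(x = 0\), which is infeasible (\(1 - x = 1 > 0\)); so \(1 - x = 0\), giving \(x^* = 1\) and \(\lambda^* = 2 \ge 0\). All KKT conditions hold, so \(x^* = 1\) is optimal.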


Optimality Conditions for the Problem with Log Barrier

x* optimal for (P_t) and ν* optimal for (D_t)  ⟺  the relaxed KKT conditions hold:

  fᵢ(x*) ≤ 0                 ∀ i = 1…m
  hⱼ(x*) = 0                 ∀ j = 1…p
  λᵢ* ≥ 0                    ∀ i = 1…m
  ∇ₓ L(x*, λ*, ν*) = 0
  λᵢ* fᵢ(x*) = −1/t          ∀ i = 1…m    (complementary slackness relaxed from 0 to −1/t)


Timeline

1939: Leonid Kantorovich, formulation of LP
1947: George Dantzig, the Simplex method
1951: Herbert Robbins, SGD (w/ Monro)
1956: Marguerite Frank & Philip Wolfe, Conditional GD
1972: the Ellipsoid Method
1978: non-smooth SGD + analysis (Arkadi Nemirovski)
1979: Leonid Khachiyan, Ellipsoid is poly time for LP
1984: Narendra Karmarkar, IP for LP
1994: Yuri Nesterov & Arkadi Nemirovski, general IP
KKT conditions: William Karush (1939), Harold Kuhn & Albert Tucker (1951)


From Interior Point Methods Back to First Order Methods

• Interior Point Methods (and before them Ellipsoid):
  • O(poly(n) log(1/ε))
  • Solve LPs in time polynomial in the size of the input (number of bits in the coefficients)
  • But in large-scale applications, O(n^3.5) is too high

• First order (and stochastic) methods:
  • Better dependence on n
  • poly(1/ε) (or poly(κ)) dependence on the accuracy


Overall runtime

( runtime of each iteration ) × ( number of required iterations )

where runtime of each iteration = ( oracle runtime ) + ( other operations )


Example: ℓ₁ regression

  f(x) = Σ_{i=1}^m |⟨aᵢ, x⟩ − bᵢ|

• Option 1: Cast as constrained optimization (an LP)

    min_{x ∈ ℝⁿ, t ∈ ℝᵐ}  Σᵢ tᵢ   s.t.  −tᵢ ≤ ⟨aᵢ, x⟩ − bᵢ ≤ tᵢ

  • Runtime: O(√m (n + m)³ log(1/ε))

• Option 2: Gradient Descent
  • O(‖a‖² ‖x*‖² / ε²) iterations
  • O(nm) per iteration; overall O(nm / ε²)

• Option 3: Stochastic Gradient Descent (sketched after this list)
  • O(‖a‖² ‖x*‖² / ε²) iterations
  • O(n) per iteration; overall O(n / ε²)


Remember!

• The gradient is in the dual space: don't take the mapping between the spaces for granted!

  [Diagram: x lives in ℝⁿ; ∇f(x) lives in the dual space (ℝⁿ)*; the maps ∇Ψ : ℝⁿ → (ℝⁿ)* and ∇Ψ⁻¹ : (ℝⁿ)* → ℝⁿ translate between them.]


Some things we didn't cover

• Technology for unconstrained optimization
  • Different quasi-Newton and conjugate gradient methods
  • Different line searches
• Technology for Interior Point Primal-Dual methods
• Numerical issues
• Recent advances in first order and stochastic methods
  • Mirror Descent, different geometries, adaptive geometries
  • Decomposable functions, partial linearization and acceleration
• Faster methods for problems with specific forms or properties
  • Message passing for solving LPs
  • Flows and network problems
• Distributed Optimization


Current Trends

• Small scale problems:
  • Super-efficient and reliable solvers for use in real time

• Very large scale problems:
  • Stochastic first order methods
  • Linear time, one-pass or few-pass

• Relationship to Machine Learning


Other Courses

• Winter 2016: Non-Linear Optimization (Mihai Anitescu, Stats)

• Winter 2016: Networks I: Introduction to Modeling and Analysis (Ozan Candogan, Booth)

• 2016/17: Computational and Statistical Learning Theory (TTIC)


About the Course

• Methods for solving convex optimization problems, based on oracle access, with guarantees based on their properties

• Understanding different optimization methods
  • Understanding their derivation
  • When are they appropriate
  • Guarantees (a few proofs, not a core component)

• Working and reasoning about opt problems
  • Standard forms: LP, QP, SDP, etc.
  • Optimality conditions
  • Duality
  • Using Optimality Conditions and Duality to reason about x*


Final

• Tuesday 10:30am

• Straight-forward questions

• Allowed: anything hand-written by you (not photo-copied)

• In depth: up to and including Interior Point Methods
  • Unconstrained Optimization
  • Duality
  • Optimality Conditions
  • Interior Point Methods
  • Phase I Methods and Feasibility Problems

• Superficially (a few multiple choice or similar)
  • Other first order and stochastic methods
  • Lower Bounds
  • Center of Mass / Ellipsoid
  • Other approaches