Introduction to search and optimisation for the design theorist


An Introduction to Search and Optimisation

for Design Theorists

Akın Kazakçı

MINES ParisTech PSL Research University, CGS-I3 UMR 9217

akin.kazakci@mines-paristech.fr

Everyone designs who devises courses of action aimed at changing existing situations into preferred ones.

- Herbert Simon

Simon's definition assumes the following ontology, each element of which can be given a mathematical representation:

- States (situations): vectors x = (x1, …, xn) ∈ S
- Actions (transformations): functions y = f(x), f: S → S
- Value function: a function v(x), v: S → ℝ (or, alternatively, an explicit goal set S° ⊂ S)

Maze

- States? Positions on the board (x, y); discrete values here for the sake of the example. Current state: (4,4), goal state: (7,8).
- Actions? Go North: (0, +1), Go South: (0, -1), Go West: (-1, 0), Go East: (+1, 0).
- Value? Distance to the exit.

Goal: find the exit. What changes if the goal is to find the shortest path?

[Maze diagram: the agent at (4,4), the exit at (7,8).]
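As a minimal sketch in Python (not from the slides; the helper names and the choice of Manhattan distance for "distance to the exit" are assumptions for the example), this setup can be written down as follows:

GOAL = (7, 8)

ACTIONS = {                       # each action transforms a state (x, y)
    "North": (0, +1),
    "South": (0, -1),
    "West":  (-1, 0),
    "East":  (+1, 0),
}

def apply_action(state, action):
    # apply an action (a transformation) to a state, yielding a new state
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def value(state):
    # value function: distance to the exit (assumed Manhattan), to be minimised
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

print(apply_action((4, 4), "East"))   # (5, 4)
print(value((4, 4)))                  # 7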

Maze

[Diagram: the agent's path from (4,4) to the exit at (7,8):
(4,4) →Right→ (5,4) →Up→ (5,5) →Up→ (5,6) →Right→ (6,6) →Up→ (6,7) →Right→ (7,7) →Up→ (7,8).]

The path(s) to the solution cannot be determined in advance (otherwise, why search?). So, how do we "explore"?

Several important factors to consider when dealing with a search space:

- States: Are the state variables discrete or continuous? What is the size of the search space?
- Actions: Which actions are allowed or forbidden in a given state? What is the cost of applying an action?
- Value function: Is the goal explicit (a subset of states) or based on value? Is the value function known analytically? Is it costly to evaluate (call)?

Generating the search space

The search space is usually not explicitly stated. That means it needs to be generated.

Conceptually, this generative process can be represented as a tree: given a "node" (a state), we can "expand" it with its "neighbours" (the states accessible through actions).

Maze

[Diagram: expanding the agent's state (4,4): Up → (4,5), Right → (5,4), Down → (4,3); an obstacle blocks the cell to the left.]

Moving left is not allowed at state (4,4); thus, we cannot transform the state into (3,4). The (average) number of neighbours we can generate in a given state is called the (effective) branching factor.
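A rough Python sketch of this expansion step (again with assumed helpers; the OBSTACLES set is made up for the example, and ACTIONS and apply_action are reused from the earlier sketch):

OBSTACLES = {(3, 4)}                      # assumption: the cell to the left of (4,4) is blocked

def expand(state):
    # return the (action, neighbour) pairs reachable from `state` through allowed actions
    neighbours = []
    for action in ACTIONS:
        nxt = apply_action(state, action)
        if nxt not in OBSTACLES:
            neighbours.append((action, nxt))
    return neighbours

print(expand((4, 4)))   # three neighbours, so the branching factor at (4,4) is 3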

Maze

[Diagram: a larger search tree for the same maze, which also expands branches leading away from the exit (states such as (3,4) and (3,3), reached by Left and Down moves) alongside the path (4,4) → (5,4) → (5,5) → (5,6) → (6,6) → (6,7) → (7,7) → (7,8).]

What if we expand like this?

We lose time (and possibly resources).

Blind search

- Two basic strategies for generating the search tree are breadth-first and depth-first search.
- Neither strategy offers an ideal approach: the search is unguided, and it can be very costly in time, memory, or both.

[Images from Wikipedia: breadth-first vs. depth-first exploration; the numbers represent the order in which nodes are visited.]

(If the branching factor is infinite, e.g. with continuous state representations, there is no guarantee of completeness.)
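For concreteness, a breadth-first exploration of the maze could be sketched as follows in Python (reusing the expand helper assumed above; switching frontier.popleft() to frontier.pop() turns it into depth-first exploration):

from collections import deque

def breadth_first_search(start, is_goal):
    frontier = deque([[start]])            # FIFO queue of paths still to extend
    visited = {start}
    while frontier:
        path = frontier.popleft()          # use frontier.pop() for depth-first exploration
        state = path[-1]
        if is_goal(state):
            return path                    # in breadth-first order, this is a shortest path (in moves)
        for _, neighbour in expand(state):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(breadth_first_search((4, 4), lambda s: s == (7, 8)))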

How to guide the search?

• That’s the million dollar question.

• A starting point is to use the “value” function (when it is available) for evaluating the “promise” of a path.

• This kind of search strategy is called heuristic or informed search.

• The archetypal example is Hill-Climbing (greedy) search.

Greedy local search (Hill-Climbing)

function Hill-Climbing(problem)
  variables: current, neighbour
  current ← MAKE_NODE(INIT(problem))
  loop
    neighbour ← ARGMAX(VALUE(GET_NEIGHBOURS(current)))
    if VALUE(neighbour) < VALUE(current) then return current
    current ← neighbour

Initialise with the starting state.

Generate all neighbours and select the one with the maximum value.

If it is not better than the current state, return the current state.

Otherwise, change the current state to the best neighbour.
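Translated into runnable Python (a sketch only, reusing the expand and value helpers assumed earlier; since the maze value is a distance to be minimised, the pseudocode's ARGMAX becomes a min here):

def hill_climbing(start):
    current = start
    while True:
        neighbours = [nxt for _, nxt in expand(current)]
        if not neighbours:
            return current
        best = min(neighbours, key=value)     # greedy: the neighbour closest to the exit
        if value(best) >= value(current):     # no neighbour improves on the current state
            return current                    # possibly a local optimum, not the exit
        current = best

print(hill_climbing((4, 4)))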

Maze

[Diagram: greedy hill-climbing from (4,4) moves up and to the right through states such as (5,4), (5,5), (4,5) and (4,6), and ends up stuck at (5,6), where a barrier blocks the way to the exit.]

This is not the (optimal) solution.

This is a "local" optimum, very much dreaded throughout the search and optimisation literature.

An illustration of local optima

[Plot: V(x) = distance to the exit, over the states; the current state is marked with a red dot, with neighbouring states xk and xl on either side; the global optimum lies elsewhere.]

Assume the red dot is the value of the current state. What is the next move?

All neighbouring states have higher values! (Remember we are minimising the distance.)

Greedy search easily gets stuck in local optima.

Rastrigin function

Image from MathWorks

You think this is complicated? What happens when the number of variables increases?
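(For reference, the n-dimensional Rastrigin function is usually defined as v(x) = 10n + Σi (xi² − 10 cos(2π xi)); its many regularly spaced local optima make it a standard stress test for local search methods.)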

Simulated annealing

function Simulated_annealing(problem)
  variables: current, neighbour, T
  current ← MAKE_NODE(INIT(problem))
  while T > 0
    neighbour ← PICK_ONE(GET_NEIGHBOURS(current))
    if P(VALUE(neighbour), VALUE(current), T) > RANDOM(0, 1) then current ← neighbour
    DECREASE_TEMPERATURE(T)
  return current

Initialise with starting state

Pick one neighbour

Move to neighbour with some probability

Decrease system temperature
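The same idea in runnable Python (a sketch under the assumptions of the earlier maze helpers; the acceptance rule exp(-Δ/T), the starting temperature and the cooling schedule are standard choices, not specified on the slides):

import math, random

def simulated_annealing(start, T=10.0, cooling=0.95, T_min=1e-3):
    current = start
    while T > T_min:
        neighbours = [nxt for _, nxt in expand(current)]
        neighbour = random.choice(neighbours)          # pick one neighbour at random
        delta = value(neighbour) - value(current)      # < 0 means an improvement (we minimise)
        # always accept improvements; accept worse moves with probability exp(-delta / T)
        if delta < 0 or math.exp(-delta / T) > random.random():
            current = neighbour
        T *= cooling                                   # decrease the system temperature
    return current

print(simulated_annealing((4, 4)))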


Simulated annealing

Gif from Wikipedia

Demo

http://qiao.github.io/PathFinding.js/visual/

Optimisation

Given a search problem, find the optimum value of v(x):

max v(x), s.t. x ∈ S

("s.t." = "subject to": x is constrained to take values only in S)

Remark: max v(x) = min −v(x). The same algorithms can be used to solve either formulation.

What changes compared to the previous search setup?

The ideal (desired) state is not known in advance. If we are lucky, an analytical form of the value function is known. In optimisation, it is the value function that is being explored: we do not know which state we want to end up in, but we want it to have the best possible value.
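As a small illustration of the remark that max v(x) = min −v(x) (a sketch, not from the slides; the toy v and the use of scipy.optimize.minimize with Nelder-Mead are choices made for the example), any off-the-shelf minimiser can maximise v when fed the negated function:

import numpy as np
from scipy.optimize import minimize

def v(x):
    # a toy value function whose maximum is at x = (1, 2)
    return -((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)

# maximise v by minimising -v; Nelder-Mead only needs function values, not gradients
result = minimize(lambda x: -v(x), x0=np.zeros(2), method="Nelder-Mead")
print(result.x)        # approximately [1., 2.]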

Why is this even a model for design?

Consider the following questions in the context of a design activity:

- What would the points be?
- What does the objective function represent?
- How would the designer generate the search tree?
- What would a new move be?

How would you model Wooten & Ulrich's notion of feedback?

Bonus question: What kind of "design space" underlies logo design?

Can Lehman & Stanley's approach be seen as a model of creative behaviour?

Is this different from the search & optimisation we have seen?

The effect of adding “one” variable to the “design space”

Questions

• What does the model become if we assume that the designer can add or remove variables during the course of the design?

• Where would the designer find new variables?

• How can a designer decide which variable to add?

• How would you model this process?

Akın Kazakçı akin.kazakci@mines-paristech.fr