Testing COP-Solvers with a Hyper-Redundant Manipulator Model


University of Kassel

Project Report by

Jens Schreiber

At

Distributed Systems

Reviewer: Prof. Dr. Kurt Geihs

Supervisor: Dipl.-Inf. Hendrik Skubch

July 30, 2012


Abstract

Optimization is present in fields such as Physics, Statistics, Data Mining and Robotics. The optimization of continuous non-linear problems is motivated here by the field of Robotics, more precisely by IMPERA (Integrated Mission Planning for Distributed Systems) and the Middle Size League (MSL) of the RoboCup initiative. In this paper, we compare different solvers, namely the ALICA-Solver [7], the Covariance Matrix Adaptation Evolution Strategy (CMAES) [2], Sequential Quadratic Programming (SQP) [5] and two hybrid solvers, on a hyper-redundant manipulator model [5]. We are especially interested in the comparison of the ALICA-Solver to the other solvers. Therefore, we model a robot arm that tries to grab an object while avoiding obstacles, a typical task in the robotic domain. We show that the ALICA-Solver is sufficiently precise and as good as CMAES with respect to the median error. Furthermore, the ALICA-Solver has the shortest execution time and the smallest number of function evaluations on this benchmark problem.


Contents

1 Introduction
2 Problem Definition
3 Foundation
   3.1 Constraint Optimization Problems
   3.2 Sequential Quadratic Programming
   3.3 Covariance Matrix Adaptation Evolution Strategy
   3.4 Hybrid-Solvers
   3.5 ALICA-Solver
4 Implementation
   4.1 Testmodel
   4.2 CFSQP
   4.3 ESKit
   4.4 Hybrid-Solver
   4.5 ALICA
5 Evaluation
6 Summary


1 Introduction

Optimization is present in fields such as Physics, Statistics, Data Mining and Robotics. The optimization of continuous non-linear problems is motivated here by the field of Robotics, more precisely by IMPERA (Integrated Mission Planning for Distributed Systems) and the Middle Size League (MSL) of the RoboCup initiative. In this paper, we compare different solvers, namely the ALICA-Solver [7], the Covariance Matrix Adaptation Evolution Strategy (CMAES) [2], Sequential Quadratic Programming (SQP) [5] and two hybrid solvers, on a hyper-redundant manipulator model [5]. We are especially interested in the comparison of the ALICA-Solver to the other solvers.

The IMPERA project investigates robotic teams that explore an unknown lunar environment with mobile robots. To this end, software is developed which is able to solve distributed and automated tasks. An example of such an exploration can be seen in Figure 1.1a: it shows a mobile robot exploring the lunar test environment at the DFKI-Bremen¹.

RoboCup is an international initiative with the goal of developing intelligent robots through competitions. Its long-term goal is to win against the reigning human world champion in football by the year 2050. Figure 1.1b shows the cyan robots of the Carpe Noctem² (CN) MSL team.

¹ robotik.dfki-bremen.de
² www.das-lab.net

Figure 1.1: IMPERA and RoboCup sample: (a) lunar environment, (b) RoboCup

Both projects have in common that the robots need to react to their environment. This is mainly done by observing the environment with a camera,


extracting features of the real world, communicating with the team and making decisions according to the current situation. The robots are autonomous, hence they need to make their own decisions, such as planning a path or cooperating with team members.

Furthermore, both projects use the behaviour modelling language ALICA – A Language for Interactive Cooperative Agents [8]. One part of ALICA is the ALICA-Solver (Rprop(Σ∧, max)), which can be used to solve the previously mentioned continuous non-linear optimization problems. In this paper, we evaluate this solver with a hyper-redundant manipulator model [5] with respect to time and precision. Additionally, we compare the performance of Rprop(Σ∧, max) with two hybrid solvers [5], one SQP solver [10] and CMAES [2].

We define a hyper-redundant manipulator model with the Denavit-Hartenberg convention in Chapter 2 and briefly describe the different solvers employed in Chapter 3. Chapter 4 explains the implementation details of the different algorithms. In the last two chapters we evaluate and summarize this work.


2 Problem Definition

The goal of the continuous non-linear optimization problem is to move a robot arm to a specific position while avoiding obstacles and satisfying physical constraints. This inverse kinematics problem occurs, for instance, when a robot grabs a stone.

The modelling of such a robot arm is done using the Denavit-Hartenberg model (DHM). The DHM has been used in Shang et al. [5] for describing the positions of the links of a robot arm and is mainly used for forward and inverse kinematics. We model a robot arm as a chain of links, where each link i has the parameters αi, ai, θi and di of the Denavit-Hartenberg convention.

Figure 2.1: Denavit-Hartenberg-Diagram [9]

Figure 2.1 shows the relation between two links and the DHM parameters. Each link has its own coordinate system, and the z axis always points along the axis of rotation. For the first link,


the directions of the x and y axes are freely selectable. Afterwards, the x axis is derived from the common normal of the axes z_n and z_{n−1}, and the y axis is chosen with respect to x and z. The parameter d is the distance between x_n and x_{n−1}, θ is the angle between x_n and x_{n−1}, a is the length between z_n and z_{n−1} and α is the angle between z_n and z_{n−1}.

Taking these parameters into account, Formula 2.1 defines the rotation and translation matrix used to calculate the global coordinates of link i [5]. The global position g_i of link i can then be calculated from the base point p = [0, 0, 0, 1]^T via g_i = p T_1 ⋯ T_{i−1}.

T =
\begin{pmatrix}
\cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\
\sin\theta_i \cos\alpha_i & \cos\theta_i \cos\alpha_i & -\sin\alpha_i & -\sin\alpha_i \, d_i \\
\sin\theta_i \sin\alpha_i & \cos\theta_i \sin\alpha_i & -\cos\alpha_i & -\cos\alpha_i \, d_i \\
0 & 0 & 0 & 1
\end{pmatrix}
\qquad (2.1)
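To make the chaining of the transformation matrices concrete, the following C++ sketch builds the matrix of Formula 2.1 from the DH parameters of a single link and accumulates the global position of every link. The struct and function names are illustrative and not taken from the project code; the matrix entries simply follow the printed formula.

#include <array>
#include <cmath>
#include <vector>

using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;

// DH parameters of one link i (Formula 2.1); theta is the free variable.
struct Link { double alpha, a, theta, d; };

// Transformation matrix of Formula 2.1; the first row uses the link length a
// (a_{i-1} in the printed matrix, depending on the indexing convention).
Mat4 dhTransform(const Link& l) {
    const double ct = std::cos(l.theta), st = std::sin(l.theta);
    const double ca = std::cos(l.alpha), sa = std::sin(l.alpha);
    Mat4 T = {{{{ct, -st, 0.0, l.a}},
               {{st * ca, ct * ca, -sa, -sa * l.d}},
               {{st * sa, ct * sa, -ca, -ca * l.d}},
               {{0.0, 0.0, 0.0, 1.0}}}};
    return T;
}

Mat4 multiply(const Mat4& A, const Mat4& B) {
    Mat4 C{};                                            // zero-initialized
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            for (int k = 0; k < 4; ++k) C[r][c] += A[r][k] * B[k][c];
    return C;
}

Vec4 apply(const Mat4& A, const Vec4& v) {
    Vec4 w{};
    for (int r = 0; r < 4; ++r)
        for (int k = 0; k < 4; ++k) w[r] += A[r][k] * v[k];
    return w;
}

// Global position of every link: chain the matrices and apply them to the
// base point p = [0, 0, 0, 1]^T.
std::vector<Vec4> globalPositions(const std::vector<Link>& links) {
    const Vec4 base = {{0.0, 0.0, 0.0, 1.0}};
    Mat4 acc = {{{{1, 0, 0, 0}}, {{0, 1, 0, 0}}, {{0, 0, 1, 0}}, {{0, 0, 0, 1}}}};
    std::vector<Vec4> positions;
    for (const Link& link : links) {
        acc = multiply(acc, dhTransform(link));          // T_1 * T_2 * ... * T_i
        positions.push_back(apply(acc, base));
    }
    return positions;
}

The accumulation mirrors the MultiplyMatrix chain that appears later in Pseudocode 2.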

The objective function and the constraints are defined in terms of these global positions. The objective function is the expression which a solver tries to minimize or maximize; it is often also called cost function (minimization) or utility function (maximization). The objective function is defined by

f = \sqrt{(x_g - x_e)^2 + (y_g - y_e)^2 + (z_g - z_e)^2} \qquad (2.2)

where [x_g, y_g, z_g, 1]^T is the global goal position and [x_e, y_e, z_e, 1]^T is the global position g_i of the last link. Formula 2.2 thus describes the minimization of the distance between the last link and the goal position. Physical limits and obstacle avoidance are examples of constraints. In contrast to the objective function, constraints are equalities or inequalities which need to be satisfied rather than minimized or maximized.

Figure 2.2 shows an example of such a robot arm described by the DHM. Besides optimizing the objective function, the robot arm needs to avoid obstacles such as bottles, chairs and tables. The avoidance of a spherical obstacle j at location p_j = [x_j, y_j, z_j, 1]^T with radius r_j by link i with origin o_i = [x_i, y_i, z_i, 1]^T is given by the inequality

\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2} \;\geq\; a_i + r_j. \qquad (2.3)
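Given the global link positions, the objective of Formula 2.2 and the obstacle-avoidance constraint of Formula 2.3 reduce to Euclidean distances. The following is a minimal sketch, reusing the Vec4 type from the previous listing; the names are illustrative.

#include <cmath>

// Euclidean distance between two homogeneous points [x, y, z, 1]^T.
double eDist(const Vec4& p, const Vec4& q) {
    const double dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Objective (Formula 2.2): distance of the last link (end effector) to the goal.
double objective(const Vec4& endEffector, const Vec4& goal) {
    return eDist(endEffector, goal);
}

// Obstacle constraint (Formula 2.3): link i (origin o_i, length a_i) must keep a
// distance of at least a_i + r_j to the centre of spherical obstacle j (radius r_j).
bool obstacleAvoided(const Vec4& linkOrigin, double a_i, const Vec4& obstacle, double r_j) {
    return eDist(linkOrigin, obstacle) >= a_i + r_j;
}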


Figure 2.2: Hyper-redundant manipulator model with 9 obstacles and 20 links [5, page 439]


3 Foundation

After defining the hyper-redundant manipulator model in Chapter 2, we briefly define the Constraint Optimization Problem (COP) in Section 3.1. In Section 3.2 we explain the Sequential Quadratic Programming solver (SQP-Solver) based on [10]. The solver in Section 3.3 is the Covariance Matrix Adaptation Evolution Strategy (CMAES) by Hansen and Ostermeier [2]. The third solver, in Section 3.4, is a Hybrid-Solver that combines an SQP-Solver with a static penalty method [5]. Last but not least, the ALICA-Solver by Skubch [7] is introduced in Section 3.5.

A historical comparison shows big differences: the SQP method was published in 1970, CMAES was introduced in 1996, the hybrid solvers were proposed in 2006 and the ALICA-Solver was presented in 2012.

3.1 Constraint Optimization Problems

\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad
\begin{cases}
c_i(x) = 0, & i \in \varepsilon \\
c_i(x) \geq 0, & i \in I
\end{cases}
\qquad (3.1)

Formula 3.1 defines a constraint minimization problem [3], where ε and I are two finite sets of indices [3]. This means that a solver tries to minimize the objective function f(x) while satisfying the equality constraints c_i with i ∈ ε and the inequality constraints c_i with i ∈ I. The constraints restrict the solution space Ω, defined in Formula 3.2.

\Omega = \{\, x \mid c_i(x) = 0, \ i \in \varepsilon; \ c_i(x) \geq 0, \ i \in I \,\} \qquad (3.2)

With this we can rewrite the COP in a more compact manner as

\min_{x \in \Omega} f(x) \qquad (3.3)


3.2 Sequential Quadratic Programming

The SQP-Solver is an iterative method that transforms the initial problem into a quadratic subproblem. Solving this subproblem can be done using Lagrange multipliers [10], which can be used to calculate the descent within the space of possible solutions Ω.

Generally, SQP methods work well for significantly non-linear problems [3]. SQP is an iterative algorithm that approximates the initial problem by a quadratic subproblem in every iteration. In contrast to highly non-linear problems, the quadratic subproblems are easier to solve. The solution of each subproblem is used to generate a new iterate, and the iterates are generated such that they converge to a local minimum.

The basic idea of the SQP solver used here – CFSQP [10] – is similar to that of fmincon [1] in Matlab, which is used in [5]. First of all, the initial point given by the user is checked for whether it satisfies all constraints. If not, CFSQP generates a new point which satisfies all constraints. To find a feasible solution, CFSQP solves a "standard quadratic program using a positive definite estimate H of the Hessian of the Lagrangian" [10].

The Hessian matrix is a square matrix of the second-order partial derivatives of a function of several variables. Typically, the Hessian matrix is approximated by a Quasi-Newton method; the most popular Quasi-Newton method is Broyden–Fletcher–Goldfarb–Shanno (BFGS), named after its inventors [3].

A positive definite estimate of the Hessian of the Lagrangian is needed because it indicates that the current point is a minimum, analogous to the second-derivative test in differential calculus. In contrast to fmincon, which uses an active set strategy¹, CFSQP uses an Armijo-type line search [10] to find a solution of the quadratic subproblem.

¹ http://www.mathworks.de/help/toolbox/optim/ug/fmincon.html
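To illustrate the idea of an Armijo-type line search: the step length along a descent direction is shrunk until a sufficient decrease of the objective is achieved. The following generic backtracking sketch shows only this rule; it is not CFSQP's actual implementation, which in addition keeps the iterates feasible.

#include <cstddef>
#include <functional>
#include <vector>

// Generic Armijo backtracking: shrink the step t until
// f(x + t*d) <= f(x) + c1 * t * <grad f(x), d> holds (sufficient decrease).
double armijoStep(const std::function<double(const std::vector<double>&)>& f,
                  const std::vector<double>& x,
                  const std::vector<double>& grad,
                  const std::vector<double>& d,
                  double c1 = 1e-4, double beta = 0.5, double t0 = 1.0) {
    double slope = 0.0;                                  // directional derivative <grad, d>
    for (std::size_t i = 0; i < x.size(); ++i) slope += grad[i] * d[i];
    const double fx = f(x);
    double t = t0;
    while (t > 1e-12) {
        std::vector<double> trial(x.size());
        for (std::size_t i = 0; i < x.size(); ++i) trial[i] = x[i] + t * d[i];
        if (f(trial) <= fx + c1 * t * slope) break;      // accept the step
        t *= beta;                                       // otherwise backtrack
    }
    return t;
}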

3.3 Covariance Matrix Adaptation Evolution Strategy

Unlike SQP, the Covariance Matrix Adaptation Evolution Strategy (CMAES) is not derivative-based. It is a stochastic evolution strategy (ES). It can be explained in analogy to biological evolution, in which in each generation the best individuals are selected to improve the objective function; this process is called self-adaptation.

In classical ES, the objective function is improved by changing individuals (variables) in each evolution step. The decisions in an evolution step are usually based on (stochastic) variation. Unlike traditional ES, Hansen [2] suggests taking a complete evolution path into consideration


for the self-adaptation. This evolution path yields further information on the correlation between mutation steps [2], which can be used for step-size adaptation and usually makes the adaptation faster [2].

Furthermore, Hansen [2] uses covariance matrices to remove the correlations between the single mutation steps of the evolution path, since removing the correlations between the random variables yields the shortest evolution path. The covariance is thus used both to quantify the dependence between the random variables and to remove their correlations.
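The central mechanism can be illustrated by sampling mutation steps from a multivariate normal distribution whose covariance matrix encodes the learned correlations. The sketch below shows only this sampling step for a two-dimensional problem via a Cholesky factor; it is not the full CMAES covariance and step-size update, and all names are illustrative.

#include <array>
#include <cmath>
#include <random>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<std::array<double, 2>, 2>;

// Cholesky factor L of a symmetric positive definite 2x2 covariance C = L * L^T.
Mat2 cholesky2(const Mat2& C) {
    Mat2 L{};
    L[0][0] = std::sqrt(C[0][0]);
    L[1][0] = C[1][0] / L[0][0];
    L[1][1] = std::sqrt(C[1][1] - L[1][0] * L[1][0]);
    return L;
}

// One offspring x = m + sigma * L * z with z ~ N(0, I): the mutation step is
// correlated exactly as prescribed by the covariance matrix C.
Vec2 sampleOffspring(const Vec2& mean, double sigma, const Mat2& L, std::mt19937& rng) {
    std::normal_distribution<double> n01(0.0, 1.0);
    const double z0 = n01(rng), z1 = n01(rng);
    return {{mean[0] + sigma * L[0][0] * z0,
             mean[1] + sigma * (L[1][0] * z0 + L[1][1] * z1)}};
}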

3.4 Hybrid-Solvers

The purpose of a hybrid solver is to combine the advantages of different solvers. In [5], Shang et al. introduce hybrid solvers for COPs; Pseudocode 1 shows the algorithm.

Algorithm 1 Hybrid-Solver

1: Run the static penalty method from a starting point xs to minimize the sum of constraint violations (Approach (1)) or the sum of the objective and the constraint violations (Approach (2)), and obtain a solution x′.
2: Run the SQP method from x′. If SQP successfully terminates with a feasible solution, then stop.
3: Generate a new random point and go to Step 1.

This hybrid solver uses an SQP-Solver and a static penalty method (SPM). Generally, we speak of penalty functions when the original constrained problem is replaced by a number of unconstrained problems whose solutions converge to the solution of the original problem. The penalty function in [5] is given by

Q(x, \mu) = f(x) + \frac{1}{\mu} \sum_{i=1}^{m} \bigl(\max(0, c_i(x))\bigr)^2.

The penalty method is called static if the penalty parameter µ is kept at a constant value, and dynamic if a series of functions with changing µ is solved.
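A minimal C++ sketch of evaluating the static penalty function above, assuming each constraint reports a value that is positive exactly when it is violated, as in the printed formula; µ stays constant for the static method.

#include <algorithm>
#include <functional>
#include <vector>

using Objective  = std::function<double(const std::vector<double>&)>;
using Constraint = std::function<double(const std::vector<double>&)>;

// Static penalty function Q(x, mu) = f(x) + (1/mu) * sum_i max(0, c_i(x))^2.
double staticPenalty(const Objective& f, const std::vector<Constraint>& c,
                     const std::vector<double>& x, double mu) {
    double q = f(x);
    for (const Constraint& ci : c) {
        const double violation = std::max(0.0, ci(x));
        q += (violation * violation) / mu;               // mu is kept constant (static)
    }
    return q;
}

Approach (1) of Pseudocode 1 would minimize only the penalty term, Approach (2) the complete Q(x, µ).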

Furthermore, Shang et al. [5] use the Quasi-Newton solver fminunc² from the Matlab toolbox. The fminunc algorithm is based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, named after its inventors [3, Chapter 8.1]. In contrast to that, we use ALGLIB³, which implements the limited-memory BFGS (L-BFGS) variant. In contrast to BFGS, L-BFGS takes only the most recent iterations into account for the calculation of the Hessian matrix, instead of all of them.

² http://www.mathworks.de/help/toolbox/optim/ug/fminunc.html
³ http://www.alglib.net/optimization/lbfgsandcg.php#header2


The more recent iterations are more relevant than older ones [3, Chapter 9], so L-BFGS exploits this to save memory.

Nevertheless, Shang et al. [5] mention that the SPM solver finds a solution quite quickly with a small penalty value, even for highly constrained problems. This solution can then be used by an SQP solver to obtain an optimal solution.

This approach for the hybrid solver is chosen because SQP solvers work best when their starting point is close to the feasible solution space.

Therefore, we choose the two best approaches from [5], shown in Pseudocode 1. The first approach minimizes the constraint violations with the SPM and then uses the resulting point as a starting point for the SQP solver, whereas the second approach minimizes the sum of the objective function and the constraint violations and passes the resulting solution to the SQP solver. In addition to these two steps, the SPM solver is restarted from a random point until the SQP solver finds a feasible solution.

3.5 ALICA-Solver

The ALICA-Solver searches for a feasible solution with an algorithm called resilient backpropagation (Rprop), which was developed in 1994 by Martin Riedmiller and Heinrich Braun [4]. Rprop is a supervised batch learning method and, unlike Lagrange-multiplier-based methods, it uses only the sign of the partial derivatives instead of their value. The sign is used to update the step sizes of the search algorithm: every time the sign changes, the last update was too big and the algorithm jumped over a local optimum [4].
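The sign-based update can be sketched as follows: the per-variable step size grows while the sign of the partial derivative stays the same and shrinks after a sign change, and the variable is moved using only the sign of the gradient. The constants and the simplified handling of the sign change follow the common Rprop description [4] and are not necessarily identical to the ALICA implementation.

#include <algorithm>
#include <cstddef>
#include <vector>

// One Rprop update for all free variables (minimization): grad is the current
// gradient, prevGrad the previous one, delta the per-variable step sizes.
void rpropStep(std::vector<double>& x, std::vector<double>& delta,
               std::vector<double>& prevGrad, const std::vector<double>& grad,
               double etaPlus = 1.2, double etaMinus = 0.5,
               double deltaMax = 50.0, double deltaMin = 1e-6) {
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double s = prevGrad[i] * grad[i];
        if (s > 0.0)      delta[i] = std::min(delta[i] * etaPlus,  deltaMax);  // same sign: grow
        else if (s < 0.0) delta[i] = std::max(delta[i] * etaMinus, deltaMin);  // sign change: shrink
        // Move against the gradient using only its sign, never its magnitude.
        if (grad[i] > 0.0)      x[i] -= delta[i];
        else if (grad[i] < 0.0) x[i] += delta[i];
        prevGrad[i] = grad[i];
    }
}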

The gradient is calculated by an extended version of Alex Shtof's AutoDiff library [6], which uses the chain rule to calculate the derivatives of functions.

Furthermore, the ALICA-Solver transforms the formula θ with the transfer function T into a real-valued function in order to obtain a global solution with Rprop [7]. Formulas 3.4 and 3.6 show the transformation, and Formulas 3.5 and 3.7 their derivatives.

<^{*}(a, b) =
\begin{cases}
1 & \text{if } a - b < 0 \\
a - b & \text{otherwise}
\end{cases}
\qquad (3.4)

\frac{\delta}{\delta\chi_i} <^{*}(a, b) =
\begin{cases}
0 & \text{if } a - b < 0 \\
\frac{\delta}{\delta\chi_i}(a - b) & \text{otherwise}
\end{cases}
\qquad (3.5)

\leq^{*}(a, b) =
\begin{cases}
1 & \text{if } a - b \leq 0 \\
a - b & \text{otherwise}
\end{cases}
\qquad (3.6)


\frac{\delta}{\delta\chi_i} \leq^{*}(a, b) =
\begin{cases}
0 & \text{if } a - b \leq 0 \\
\frac{\delta}{\delta\chi_i}(a - b) & \text{otherwise}
\end{cases}
\qquad (3.7)

A transfer function can be referred to as T(f∧, f∨) with specific choices of f∧ and f∨. The truth values depend on the interpretation: a result ≤ 0 corresponds to the boolean value false and a result ≥ 1 means true.

For the choice of f∧ and f∨, the implementation of ALICA uses the sum-based approach given in Formula 3.8; its derivative is given in Formula 3.9. Additionally, a disjunction is interpreted as a maximum operator, but this is not used in the evaluated continuous non-linear optimization problem.

\Sigma_\wedge(a, b) =
\begin{cases}
1 & \text{if } a = 1 \wedge b = 1 \\
\max(0, a) + \max(0, b) & \text{otherwise}
\end{cases}
\qquad (3.8)

\frac{\delta}{\delta\chi_i} \Sigma_\wedge(a, b) = \frac{\delta}{\delta\chi_i}(a + b)
\qquad (3.9)

Importantly, this transformation yields an optimum in a way similar to the transformation used by SQP-Solvers. In comparison to the SQP approach, only one transformation is necessary instead of numerous ones. Furthermore, the ALICA-Solver needs only the first derivative, whereas SQP-Solvers need the second.
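As an illustration of the transfer function in code, the following sketch evaluates the ≤∗ transform of Formula 3.6 and folds several constraint values with a straightforward n-ary extension of the binary sum-based conjunction of Formula 3.8; the function names are illustrative.

#include <algorithm>
#include <vector>

// <=* transform (Formula 3.6): 1 if a - b <= 0 (constraint satisfied), a - b otherwise.
double lteStar(double a, double b) {
    return (a - b <= 0.0) ? 1.0 : (a - b);
}

// Sum-based conjunction (Formula 3.8), folded over several constraint values.
double sumConjunction(const std::vector<double>& values) {
    bool allTrue = true;
    double sum = 0.0;
    for (double v : values) {
        if (v != 1.0) allTrue = false;          // "a = 1 and b = 1" case of Formula 3.8
        sum += std::max(0.0, v);
    }
    return allTrue ? 1.0 : sum;
}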


4 Implementation

In Chapter 3, we briefly discussed the background of the employed algorithms. In this chapter, we explain implementation details: the generation of the DHM test model is described in Section 4.1, and the implementation background of the solvers is given in Sections 4.2, 4.3, 4.4 and 4.5.

Furthermore, we extended a C++ implementation of the transfer function T used in [7] with an eDist helper function that calculates the Euclidean distance between obstacles and links as well as between the goal position and the last link location. The transfer function is generally used to derive an optimization problem for ESKit and Rprop(Σ∧, max), as explained in Section 3.5. The same implementation is used by the SQP-Solver and the Hybrid-Solvers as a data structure, so that all solvers have a similar evaluation effort; for these solvers the transformation is not necessary and they use the data structure according to their own implementations.

4.1 Testmodel

In Chapter 2, we defined our test model mathematically. Here we give a short overview of the structure of the model generation, shown in Pseudocode 2. It generates m obstacles and an arm with n RobotArmComponents. Each of the m obstacles has a position and a radius, and for every generated ArmComponent object the centre of mass, the current θ, θmax and θmin, αi, di and ai are defined or calculated. θmax and θmin are the limits of the free variable θi. These ArmComponents are used to generate solver-specific simulations. Detailed information about the generation is given in Pseudocode 2.

4.2 CFSQP

The two most important, so-called user-supplied, functions for CFSQP are:

void obj(int nparam, int j, double* x, double* fj, void* cd)

void constr(int nparam, int j, double* x, double* gj, void* cd)


Algorithm 2 Test Model Generation

 1: Generate language-specific header;
 2: Generate language-specific MultiplyMatrix function;
 3: Generate algorithm-specific functions;
 4: Generate free variables;
 5: Generate limits (constraints) for free variables θmax and θmin;
 6: for all o in obstacles do
 7:     Generate language-specific obstacle object o;
 8: end for
 9: for all component in ArmComponents do
10:     globalPosition[0,0] = 0;
11:     globalPosition[1,0] = 0;
12:     globalPosition[2,0] = 0;
13:     globalPosition[3,0] = 1;
14:     transFormMatrix[,] = component.getTransFormMatrix();
15:     if component is first component then
16:         actMat[,] = transFormMatrix;
17:         globalPosition = MultiplyMatrix(actMat, globalPosition);
18:     else
19:         actTrans[,] = transFormMatrix;
20:         actMat[,] = MultiplyMatrix(actMat, actTrans);
21:         globalPosition[,] = MultiplyMatrix(actMat, globalPosition);
22:     end if
23:     for all obstacle in obstacles do
24:         Generate minDist = component.a + obstacle.r;
25:         Generate euclideanDist = d(obstacle, component);
26:         Generate obstacleAvoidance constraint: euclideanDist ≥ minDist;
27:     end for
28: end for


The method obj evaluates the objective function and constr evaluates the constraints. The cost function is here defined by

-102 + \sqrt{(x_g - x_e)^2 + (y_g - y_e)^2 + (z_g - z_e)^2} \qquad (4.1)

An example of the previously mentioned Term implementation is

Term* objFunc = new Add(new Constant(-102.0), new EDist(gPos, endEffector, 3));

For each mathematical function and constant a corresponding object is created; for example, EDist represents the Euclidean distance between the last link location (endEffector) and the goal position (gPos).

In addition to the objective function, CFSQP expects a constraint function. The constraint function is built in the same way as the objective function but uses additional operations such as less-than-or-equal (LTE).
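For illustration, the two user-supplied callbacks could look as follows for our model. The sketch reuses the helpers from the Chapter 2 listings, replaces the Term data structure by plain arithmetic, keeps the problem data in an illustrative ProblemData struct reachable through the cd pointer, and simplifies the constraint indexing to 0-based; it is not the actual project code.

#include <vector>

// Illustrative problem data; in the real implementation this information
// lives in the generated Term structure.
struct ProblemData {
    std::vector<Link> links;        // DH parameters; theta_i are the free variables
    std::vector<Vec4> obstacles;    // obstacle centres p_j
    std::vector<double> radius;     // obstacle radii r_j
    Vec4 goal;                      // global goal position
};

// Recompute the global link positions from the current joint angles x.
static std::vector<Vec4> positionsFromAngles(ProblemData& d, const double* x, int n) {
    for (int i = 0; i < n; ++i) d.links[i].theta = x[i];
    return globalPositions(d.links);   // DH chain from the Chapter 2 sketch
}

// Objective callback: writes the cost of Formula 4.1 into *fj.
void obj(int nparam, int /*j*/, double* x, double* fj, void* cd) {
    ProblemData& d = *static_cast<ProblemData*>(cd);
    const std::vector<Vec4> pos = positionsFromAngles(d, x, nparam);
    *fj = -102.0 + eDist(pos.back(), d.goal);
}

// Constraint callback: writes constraint j into *gj in the form g_j(x) <= 0,
// i.e. (a_i + r_j) - distance(link_i, obstacle_j) <= 0 as in Formula 2.3.
void constr(int nparam, int j, double* x, double* gj, void* cd) {
    ProblemData& d = *static_cast<ProblemData*>(cd);
    const std::vector<Vec4> pos = positionsFromAngles(d, x, nparam);
    const int nObst = static_cast<int>(d.obstacles.size());
    const int link = j / nObst;
    const int obst = j % nObst;
    *gj = (d.links[link].a + d.radius[obst]) - eDist(pos[link], d.obstacles[obst]);
}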

4.3 ESKit

Besides the initialization of the starting point and the covariance matrix, the call

ekOptimizer_evaluateFunction(&optim, fitness)

is the most important one here.

The variable fitness is a pointer to the function that evaluates the constraint optimization problem. It takes the free variables x and the number of free variables N as parameters. In contrast to CFSQP, we do not use separate functions for the constraints and the objective function. Instead, we use the ConstraintUtility class, which contains all necessary information. The fitness function is defined analogously to the cost function of Section 4.2.

The optim variable from above is the interface to ESKit. It is used to initialize and update the solver and also contains the current results.
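A sketch of what the fitness function handed to ESKit could look like, assuming the signature double fitness(double* x, int N) described above and that the returned value is to be minimized. The folding of the objective and the constraint violations into one value is a simplified stand-in for the ConstraintUtility evaluation and reuses the illustrative helpers from the previous sections.

#include <algorithm>

// Global, pre-initialized problem data (illustrative); set up before the optimizer runs.
static ProblemData g_problem;

double fitness(double* x, int N) {
    const std::vector<Vec4> pos = positionsFromAngles(g_problem, x, N);
    double value = eDist(pos.back(), g_problem.goal);            // objective, Formula 2.2
    for (std::size_t i = 0; i < pos.size(); ++i) {
        for (std::size_t j = 0; j < g_problem.obstacles.size(); ++j) {
            const double required = g_problem.links[i].a + g_problem.radius[j];
            const double dist = eDist(pos[i], g_problem.obstacles[j]);
            value += std::max(0.0, required - dist);             // penalize constraint violations
        }
    }
    return value;   // to be minimized by ESKit
}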

4.4 Hybrid-Solver

The implementation of Pseudocode 1, the hybrid solver from [5], is straightforward: it is a combination of ALGLIB, a static penalty method and CFSQP.

The implementation of CFSQP is similar to the one explained in Section 4.2, except that the initial point of the solver is provided by the SPM.


Here, we implemented two versions. The first one takes the complete COP into consideration to generate a starting point for CFSQP, while the other version minimizes only the constraint violations. In both cases we use a data structure similar to the one of ESKit and ALICA, to provide the same evaluation effort for all solvers.

4.5 ALICA

The implementation of the ALICA-Solver uses an extended version of Alex Shtof's AutoDiff library in C#, ported to Linux. This ported version is used to describe the objective function and the constraints.

In order to measure only the execution time of Rprop(Σ∧, max), it is necessary to solve a trivial problem beforehand; this makes Mono load all necessary classes in advance.

All information for the solver, such as the limits, the objective function and the constraints, is stored in the ConstraintDescriptor class. The corresponding members and their descriptions are listed in Table 4.1.

Table 4.1: Overview of important Rprop(Σ∧, max) members

Member                        Description
StaticRanges                  limits of the free variables
Utility                       objective function
Constraint                    limiting constraints
UtilitySufficiencyThreshold   abort criterion for the precision of the objective function

Because Rprop(Σ∧, max) is a maximizing solver and uses the transfer function for constraint problems mentioned in [7], we define the objective function as 102 − √((x_g − x_e)² + (y_g − y_e)² + (z_g − z_e)²), where the root is again the Euclidean distance. The constant 102 ensures that we stay within the defined value range of the ALICA-Solver implementation, since the Euclidean distance is much smaller than 102.


5 Evaluation

The results of the evaluation of the ALICA-Solver, ESKit, the hybrid solvers and the CFSQP-Solver on the hyper-redundant manipulator model are shown in Figures 5.1, 5.2, 5.3 and 5.4. For these results, each solver was evaluated with models containing 0 up to 80 obstacles, and for each number of obstacles the experiment was repeated 100 times with randomly generated values. The optimization was aborted once the solver found a feasible solution within 0.05 of the optimum, or after more than 1,000,000 function evaluations.

The goal positions are placed randomly in the area 0 ≤ x, y, z ≤ 0.4. Obstacles are generated randomly for each test in the area 0.1 ≤ x ≤ 0.7, 0.0 ≤ y ≤ 0.6 and 0.0 ≤ z ≤ 0.6, each with a radius of 0.01. For every newly generated goal position it is ensured that it is reachable. θi is initialized with 0, θmax = +π/2 and θmin = −π/2. The remaining variables needed for the DHM are defined in Table 5.1.

In addition, we developed a CheckConstraint class which ensures that the results of the solvers are correct. If a solution was incorrect, it is deleted from the results and the best one is removed as well to maintain comparability. Table 5.2 lists the two approaches of the hybrid solvers: HybridSolverV1 refers to Approach 1 and HybridSolverV2 to Approach 2 of Section 3.4. These names are used in the following four figures.

Table 5.1: DHM variables

Name   Value
ai     0.075
di     0.0
αi     random value between ±π

Table 5.2: Description of the hybrid solvers

Name             Description
HybridSolverV1   SPM minimizes the constraint violations
HybridSolverV2   SPM minimizes the constraint violations and the objective function

Figure 5.1 illustrates the median error, which is the median Euclidean distance between the goal position and the last link, while Figure 5.2 shows the median time for


the solution of each COP. Besides these two graphics, Figure 5.3 shows the median number of function evaluations and Figure 5.4 the median execution time with respect to the number of function evaluations.

The CFSQP-Solver and the two hybrid solvers have a smaller error than the ALICA-Solver and ESKit, but for all solvers the error stays roughly constant with an increasing number of obstacles. This is due to the accuracy used as an abort criterion: Rprop(Σ∧, max) and ESKit check the abort accuracy after every iteration, whereas the CFSQP-Solver checks only every fourth. Thus, Rprop(Σ∧, max) and ESKit can take advantage of aborting early once the precision is good enough, which is one reason why their execution times are better than those of the other solvers (Figure 5.2). We can therefore say that the hybrid solvers and the CFSQP-Solver are noticeably more precise than the other solvers, with a median error of 0.01 instead of 0.05, but all of them stay within the limits.

Figure 5.1: Median error

Surprisingly, ESKit and the ALICA-Solver have almost equally good execution times. Unlike the SQP-Solver and the ALICA-Solver, ESKit is not derivative-based; it uses the covariance to identify strongly correlated variables (see Section 3.3). The shortest execution time, achieved by the ALICA-Solver, is explained by the smallest number of function evaluations (see Figure 5.3). Both ESKit and Rprop(Σ∧, max) have a median execution time mostly below 100 ms.

Figure 5.2: Median time

Figure 5.3: Median function evaluations

Figure 5.4: Filtered median time of function evaluations

Another interesting observation is that the median times of CFSQP, HybridSolverV1 and

HybridSolverV2 are quite close, especially above 50 obstacles. They are this similar because CFSQP by itself already finds feasible solutions quickly.

Figures 5.3 and 5.4 explain the best execution time of Rprop(Σ∧, max). Figure 5.4 shows that the ALICA-Solver needs more time for the same number of function evaluations than ESKit, but less than CFSQP and HybridSolverV2. Rprop(Σ∧, max) also calculates the derivatives, which might be a reason for the higher execution time per function evaluation compared to ESKit. Figure 5.3 shows that Rprop(Σ∧, max) has the smallest number of function evaluations.

Comparing the median number of function evaluations, Rprop(Σ∧, max) is the best and HybridSolverV1 the worst. HybridSolverV1 (Approach 1) first tries to minimize the constraint violations, but the SPM solver based on ALGLIB apparently does not find good solutions, so Approach 1 often needs to generate new starting points. The numbers of function evaluations of ESKit, CFSQP and HybridSolverV2 lie between 300 and 10,000, so they are higher than those of the ALICA-Solver.

In [5, Section 5.4], Shang et al. evaluate HybridSolverV2, based on fminunc and fmincon from Matlab, as well as fmincon on its own. In contrast to our DHM, Shang et al. [5] use a 2-dimensional model with 10 links instead of 12. The obstacles have the same radius but are placed randomly in the range −0.6 < x < 0.6 and 0.1 < y < 0.7, and the goal position is generated inside the area −0.6 < x < 0.6


and 0 < y < 0.6. Furthermore, they limit the solvers to 10,000 function evaluations and 100 iterations, whereas we allow 1,000,000 function evaluations. The parameters used in our evaluation are more complex, because we work in a 3-dimensional space.

In [5], the median number of function evaluations increases exponentially, while in our evaluation space the number of function evaluations grows linearly. Furthermore, except for HybridSolverV1 and ESKit, the number of evaluations is below 3,000 for 80 obstacles, whereas in [5] it is above 6,000 for 80 obstacles.

The median errors are constantly close to the minimum precision, between 0.02 and 0.04, and do not increase with the number of obstacles. The median errors of the solvers in [5] and of Rprop(Σ∧, max) increase up to 0.04.

Similar to the results in [5], HybridSolverV1 is not as good as HybridSolverV2 or CFSQP. However, Shang et al. [5] obtained better results for HybridSolverV2 than for the SQP-Solver, whereas in our results there is no significant difference: the time and precision results in this paper are quite similar for HybridSolverV2 and CFSQP. Here, the 3D continuous non-linear problem might be easier than the 2D problem, because the arm has more freedom to avoid the obstacles.


6 Summary

Based on the results obtained in the evaluation of Chapter 5, we cannot confirm the advantage of hybrid solvers over CFSQP, the ALICA-Solver or ESKit that is reported in [5]. We can confirm that Rprop(Σ∧, max) keeps its error within the limits, needs the smallest number of function evaluations on our DHM and has the shortest execution time.

Surprisingly, the median execution time of ESKit is drastically lower than that of CFSQP and the hybrid solvers. Another interesting observation is that the execution time increases linearly with the number of function evaluations; this effect dominates in our evaluation space.

Beyond that, the median number of function evaluations is smaller than in [5]: on the one hand we use a more complex DHM than [5], on the other hand we need fewer function evaluations, as explained in Chapter 5.

Thus, the implementations of CFSQP, Rprop(Σ∧, max), ESKit and HybridSolverV2 perform better than the solvers in [5]. Moreover, Rprop(Σ∧, max) outperforms CFSQP, ESKit and HybridSolverV2 with respect to function evaluations and execution time, while its error stays within the limits.


Bibliography

[1] T. F. Coleman and Y. Li. On the convergence of interior-reflective Newton methods for nonlinear minimization subject to bounds. Mathematical Programming, 67(1):189–224, 1994.

[2] N. Hansen and A. Ostermeier. Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation. In Proceedings of the IEEE International Conference on Evolutionary Computation, pages 312–317, May 1996. doi: 10.1109/ICEC.1996.542381. URL http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=542381&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D542381.

[3] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 2006.

[4] Martin Riedmiller. Rprop – description and implementation details. Technical report, 1994. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.3428.

[5] Yi Shang, Markus P.J. Fromherz, and Lara Crawford. A new constraint test-case generator and the importance of hybrid optimizers. European Journal of Operational Research, 173(2):419–443, 2006. ISSN 0377-2217. doi: 10.1016/j.ejor.2005.02.073.

[6] Alex Shtof. AutoDiff. http://www.codeproject.com/Articles/179415/Automatic-Differentiation-using-the-AutoDiff-Libra, April 2011. Last visited 2012/07/16.

[7] Hendrik Skubch. Solving non-linear arithmetic constraints in soft realtime environments. In Proceedings of the 27th Symposium On Applied Computing (ACM SAC), volume 1, pages 67–75, Riva del Garda, Italy, 2012. ACM.

[8] Hendrik Skubch, Michael Wagner, Roland Reichle, and Kurt Geihs. A modelling language for cooperative plans in highly dynamic domains. Mechatronics, 21:423–433, 2011. ISSN 0957-4158. doi: 10.1016/j.mechatronics.2010.10.006. URL http://www.sciencedirect.com/science/article/B6V43-51NDYV0-1/2/4fcb40d82d704c20132f7ef901b8d337.

[9] Wikipedia. Denavit–Hartenberg diagram. Published in the Wikipedia article about the Denavit-Hartenberg convention, July 2012. URL http://upload.wikimedia.org/wikipedia/commons/3/3f/Sample_Denavit-Hartenberg_Diagram.png. Last visited 2012/07/16.

[10] J. Zhou and A. L. Tits. User's guide for FSQP version 2.0: A Fortran code for solving optimization problems, possibly minimax, with general inequality constraints and linear equality constraints, generating feasible iterates. Technical report, 1990.
