4.3 ANFIS 135 - tuprints



Figure 4.30: SA with 215 samples on 12 design variables.

Figure 4.31: 170 training, 245 checking samples.

Figure 4.32: 250 training, 165 checking samples. Small cluster radius.

Figure 4.33: 190 training, 30 checking samples.

Figure 4.34: 220 training, 28 checking samples. Small cluster radius.

Figure 4.35: 170 training, 78 checking samples.


136 4. Approximation Models

Figure 4.36: Surface plot between input variables 1 and 6 and the output variable (objective function).

The presented results can be compared to those of the previous section, where Bayesian regularization networks were applied to the same channel junction optimization problem. In either case, networks employing advanced features such as regularization or a cluster radius can considerably improve the approximation capabilities and, by using very few function evaluations, also reduce the computing time needed to depict the functional coherence of a complex optimization problem.

From our investigations we conclude that ANFIS training with a small network yields more satisfactory results than with a large network. It is recommended that the number of training samples be at least as large as (or better, larger than) the number of unknowns, i.e. the number of parameters in the network. All our results imply that if this rule is violated, the network tends to perform poorly. By lowering the cluster radius and thus working with a number of parameters larger than the number of training samples, the ANFIS is no longer capable of depicting the correct functional coherence between input parameters and objective function. For the largest networks, we chose 30 000 training epochs, which took several hours of network training. Still, the smaller the number of parameters, the better the network performance. Obviously, the cluster radius and the validation technique are a considerable improvement for the ANFIS approximation abilities.
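As a rough check of this rule of thumb, the parameter count of a first-order Sugeno ANFIS can be estimated from the number of inputs and rules. The following sketch is illustrative only: the function names are ours, and the counting assumes one Gaussian membership function per input and rule.

```python
def anfis_parameter_count(n_inputs, n_rules):
    """Rough parameter count for a first-order Sugeno ANFIS.

    Each rule carries one Gaussian membership function per input
    (centre and width: 2 premise parameters each) plus a linear
    consequent with n_inputs coefficients and one constant.
    """
    premise = 2 * n_inputs * n_rules
    consequent = (n_inputs + 1) * n_rules
    return premise + consequent

def enough_training_data(n_samples, n_inputs, n_rules):
    """Rule of thumb from the text: at least as many training
    samples as free parameters in the network."""
    return n_samples >= anfis_parameter_count(n_inputs, n_rules)

# 12 design variables with 8 rules: 2*12*8 + 13*8 = 296 parameters,
# so a set of 250 training samples would already violate the rule.
```

Under these assumptions, even a moderate rule count quickly exceeds the sample budgets used in the experiments, which is consistent with the poor performance observed for small cluster radii.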

Figure 4.36 shows the surface plot between input variables 1 and 6 (also see figure 2.12) and the pressure drop value (denoted as ‘out1’), which depicts the trained ANFIS input-output coherence. As can be seen, the ANFIS learned that parameters 1 and 6 need to move towards positive values. When both move in the negative direction, the objective value increases massively. Such an insight into parameter interdependence is a contribution which is easily provided by the ANFIS algorithm and can hardly be obtained otherwise without massive computations.
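A slice of this kind can be produced by sweeping two design variables over a grid while holding the remaining ones fixed and evaluating the trained surrogate at each grid point. In the sketch below, `surrogate` is only a stand-in for the trained ANFIS, chosen so that the objective decreases as variables 1 and 6 move towards positive values:

```python
def surrogate(x):
    # Stand-in for the trained ANFIS: the objective decreases
    # as design variables 1 and 6 move towards positive values.
    return (1.0 - x[0]) ** 2 + (1.0 - x[5]) ** 2

def surface_slice(f, n_vars, i, j, grid, fixed=0.0):
    """Evaluate f on a 2-D grid over variables i and j,
    holding all other variables at `fixed`."""
    Z = []
    for a in grid:
        row = []
        for b in grid:
            x = [fixed] * n_vars
            x[i], x[j] = a, b
            row.append(f(x))
        Z.append(row)
    return Z  # e.g. matplotlib's plot_surface can render this grid

grid = [k / 10.0 - 1.0 for k in range(21)]   # -1.0 ... 1.0
Z = surface_slice(surrogate, 12, 0, 5, grid)
```

The resulting matrix Z is exactly what a surface plot such as figure 4.36 visualizes; each entry costs only a surrogate call, not a solver run.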



4.3.4 Conclusions

We presented Bayesian regularization and adaptive neuro-fuzzy inference system networks to solve shape optimization problems for fluid flow processes. After training the network with a sufficient number of samples obtained from a Monte Carlo simulation, and evaluating those with a finite volume solver, we applied an evolutionary strategy which led close to the optimum shape. We furthermore applied a specialized evolutionary strategy, the simulated annealing approach, which also served as an optimization strategy. The application of neural networks to parameter optimization problems is quite recent and has so far only been investigated by a few authors. Function surrogate models are becoming an increasingly important ingredient in the context of fluid flow optimization problems. Up to today, these optimization approaches were mostly applied in the context of structural mechanics. Fluid dynamics is more demanding from a computational point of view, and so heuristic methods are most appropriate in this context.

The salient advantage of the considered methods is to save computation time by requiring fewer function evaluations from the finite volume solver than a conventional optimizer needs. For a representative example, the channel junction with 12 design variables, it was shown that only about half the number of function evaluations of a more classical derivative-free optimization method is needed to obtain an optimization result that comes close to the global optimum. With Bayesian regularization networks, which are a major improvement over networks not utilizing regularization, we were able to obtain optimization results with high computational efficiency. In comparison to the Bayesian regularization networks, the ANFIS approach proved to provide excellent approximation capabilities. Even without generalization, the network proved able to handle highly complex function approximation tasks. The outstanding feature here was clearly the use of a cluster radius parameter, which allowed the network complexity to be controlled. The computations also show that in a few cases the network failed to give reasonable results. These discrepancies indicate that the network quickly fails to give reasonable answers if the network design and parameters are not properly chosen. In ANFIS, for instance, it can be seen that the cluster radius strongly influences the number of parameters, which in turn determines the overall network performance. Thus, setting up the network is crucial for successful usage; the approximation model approach hence requires an experienced user.

However, there is still a demand to improve approximation capabilities and to design networks which are even more powerful in approximating higher dimensional parameter optimization problems. For the employment of networks in the function surrogate model context, we have provided an interesting alternative to consider.


Chapter 5

Summary and Outlook

This dissertation is about the application of soft computing methods for shape optimization of fluid flow regions in engineering-relevant applications. While soft computing methods like neural networks and evolutionary algorithms are usually analyzed in computer science and mathematics, these concepts have not yet been thoroughly investigated in mechanical engineering with the aim of improving real-world applications.

Optimization is difficult when the objective is described by a numerical solution instead of an analytical function. This objective can be, for instance, the pressure drop or heating performance of a technical device in which flow processes take place. In these cases, the objective function is not expressed analytically in the design variables. This circumstance does not allow the execution of gradient-based methods, nor are methods based on approximations to, or substitutes of, the derivative considered an efficient approach. The task was to provide optimization techniques that support the design of complex systems based on simulations.
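The cost argument can be made concrete: a forward-difference approximation of the gradient alone requires one additional solver call per design variable, and each call is a complete flow simulation. The following sketch counts calls; the solver stand-in and names are illustrative.

```python
calls = 0

def solver(x):
    """Stand-in for a finite volume evaluation; in practice each
    call is a complete flow simulation taking hours."""
    global calls
    calls += 1
    return sum(v * v for v in x)

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: n + 1 solver calls for n variables."""
    f0 = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - f0) / h)
    return g

fd_gradient(solver, [0.5] * 12)
# a single gradient already costs 13 solver calls for 12 design variables
```

An iterative gradient method multiplies this per-iteration cost, which is why derivative-free and surrogate-based strategies are pursued instead.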

Evolutionary algorithms belong to the class of heuristic methods for which only very few convergence proofs have yet been found. This drawback may be neglected when considering practical optimization problems for which scarcely any other methods are suitable. This is because evolutionary algorithms realize a number of advantages over conventional search methods and yield satisfactory results for practical engineering problems. The most important advantage here is the global search property, realized by stochastic search and parameter spread operators. EA are the basic ingredient for producing parameter samples that capture a maximum amount of information on the functional coherence of the numerical solver calls. These function properties have to be discovered, since no function information is available in closed form. We emphasize that there is virtually no a priori information on the original mapping available. So it is of interest whether the objective function is convex, multi-modal, nonlinear, or possesses other functional attributes which are important for the optimization model. EA are able to capture highly complex problems and can be used to discover these functional attributes. Deterministic methods, in contrast, do not offer any ability to independently detect functional attributes. We demonstrated a variant of EA, simulated annealing, in structural mechanics design improvement and also employed this method to compare simulated annealing with the approximation models used for the shifted channel junction problem.
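The simulated annealing variant referred to above can be sketched as a minimal accept/reject loop with geometric cooling. The objective, schedule and step size below are illustrative stand-ins, not the settings of the actual study.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=2000, seed=1):
    """Minimize f by randomly perturbing one design variable at a
    time; deteriorations are accepted with a probability that
    shrinks as the temperature t decreases."""
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    for _ in range(steps):
        # Perturb one randomly chosen design variable.
        y = list(x)
        i = rng.randrange(len(y))
        y[i] += rng.gauss(0.0, 0.1)
        fy = f(y)
        # Accept improvements always, deteriorations with
        # probability exp(-(fy - fx) / t).
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
        t *= cooling
    return x, fx

best, value = simulated_annealing(lambda x: sum(v * v for v in x), [2.0] * 6)
```

The early high-temperature phase allows uphill moves and hence an escape from local minima; as t decays the search becomes effectively greedy.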

Approximation models are taken into consideration since they allow a rapid function evaluation instead of employing the computationally expensive finite volume solver. Neural networks in particular are able to depict functional coherence with very low computational complexity. The contemporary literature took the network approach into consideration but has not yet considered progressive networks. This work presents the application of a Bayesian regularization network which realizes improved generalization features. The computations demonstrated that for the staggered channel, the minimization of the pressure drop reached close to the global minimum. Thereby, the overall computing process required fewer finite volume evaluations than a conventional sequential quadratic programming approach did.
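The effect of regularization can be illustrated on the smallest possible case: minimizing a data error ED plus the weight decay term EW = Σ w² (the regularizer listed in the glossary) pulls the weights towards zero and thereby limits the effective model complexity. The data, learning rate and coefficients below are illustrative, not taken from the computations.

```python
def train(xs, ys, alpha, lr=0.01, epochs=2000):
    """Gradient descent on E = E_D + alpha * E_W for a single
    weight in the model y ~ w * x; alpha > 0 shrinks the weight."""
    w = 0.0
    for _ in range(epochs):
        # dE_D/dw for E_D = sum (w*x - y)^2, plus 2*alpha*w from E_W
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) + 2 * alpha * w
        w -= lr * grad
    return w

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.1, 1.9, 3.2]
w_plain = train(xs, ys, alpha=0.0)  # pure least-squares fit
w_reg = train(xs, ys, alpha=5.0)    # weight decay pulls w towards zero
```

In a Bayesian regularization network the same trade-off appears in every weight, and the coefficients of the two error terms are adapted automatically during training rather than fixed by hand.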

The adaptive neuro-fuzzy inference system (ANFIS) is a further network which allows for a local approximation of the functional coherence surface. The employed cluster radius technique is similar to radial basis function networks, which have also found application in the applied shape optimization community. The cluster radius essentially determines the number of membership functions and thus the total number of parameters in the network system. Our computations showed that the smaller, less complex networks realize better generalization properties than the more complex networks did. This is because a complex network virtually interpolates the data, a characteristic that is not desired: the error measure of the network output on unseen parameter samples then increases with advancing data interpolation. The cluster radius parameter can be adjusted such that the most intensive approximation is done in the most critical regions. Like the Bayesian regularization network, the ANFIS network also needed fewer numerical solver calls than the derivative-free optimizer did. Both networks succeeded in improving the objective function close to the global minimum.
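The coherence between cluster radius and network size can be illustrated with a simplified, Chiu-style subtractive clustering on one-dimensional data: a smaller radius yields more cluster centres and hence more rules and parameters. The routine below is a sketch under these assumptions, not the exact algorithm used in the computations.

```python
import math

def subtractive_clustering(points, radius, accept=0.15):
    """Simplified subtractive clustering: each point's potential
    counts its neighbours within `radius`; the highest-potential
    point becomes a cluster centre and suppresses its own
    neighbourhood. Smaller radius -> more centres -> more rules."""
    alpha = 4.0 / radius ** 2
    beta = 4.0 / (1.5 * radius) ** 2
    pot = [sum(math.exp(-alpha * (p - q) ** 2) for q in points) for p in points]
    centres, first = [], max(pot)
    while max(pot) > accept * first:
        k = pot.index(max(pot))
        c, pc = points[k], pot[k]
        centres.append(c)
        # Suppress potential around the new centre.
        pot = [p - pc * math.exp(-beta * (x - c) ** 2) for p, x in zip(pot, points)]
    return centres

data = [0.0, 0.1, 0.2, 1.0, 1.1, 2.0, 2.1, 2.2, 3.0]
few = subtractive_clustering(data, radius=2.0)   # coarse partition
many = subtractive_clustering(data, radius=0.3)  # fine partition
```

Each centre induces one fuzzy rule, so the returned lists translate directly into the network complexity discussed above.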

Neural networks have been used as approximation models in the past; most often, radial basis function networks are considered. Although they have mostly found application for rather simple problems, as in reliability analysis for two-dimensional storeys, they became more and more popular throughout the last decade and were already considered for shape optimization. The contribution of this dissertation was to show the ability of the proposed network models to successfully handle demanding shape optimization problems in which conventional network models completely failed to give a reasonable optimization result. As our computations revealed, it is generalization that makes a network usable for these purposes. Regularization and network complexity were found to be crucial for successful optimization with approximation models. Since these features have not yet found widespread adoption, it is due to this work that optimization procedures can now be improved for challenging optimization problems.

In most real-world optimization problems more than one objective has to be considered; multi-objective optimization problems are common and appear even in simple settings. This fact has long not been taken into account. We discussed extensively that conventional methods to approximate the Pareto front quickly prove insufficient. It was concluded that the Pareto dominance concept has to be taken into consideration for algorithms in this class. Obviously, evolutionary algorithm operators allow the Pareto dominance concept to be taken into account. The Pareto set comprises design alternatives which are equally worthy. EA serve first as search algorithms, to find the Pareto front, and also for decision support, depicted by the spread along the Pareto front.
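The Pareto dominance concept underlying these operators can be stated compactly (all objectives minimized); the following sketch filters a set of candidate designs down to its non-dominated front. The design values are illustrative.

```python
def dominates(a, b):
    """a Pareto-dominates b: a is no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated points; these are the equally worthy
    design alternatives offered to the user."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

designs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
front = pareto_front(designs)
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three form the front
```

The remaining points are mutually non-dominated, which is precisely why the final choice among them requires additional decision support.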

Our computations for multi-objective problems involved two configurations of heat exchange units, optimized with respect to heat increase, pressure drop, covered flow area and flow velocity. The calculations show a clearly emerging Pareto front between pressure drop and covered flow area, and also between heat increase and pressure drop. From these observations, the user can choose from a set of alternatives the design with the desired features. Knowing about the coherence of the objectives is also valuable; again, there is no a priori knowledge about the existence or shape of the Pareto front. Moreover, since each of the proposed designs was evaluated by the numerical solver, the designs can be investigated with regard to the flow field. This may be interesting for the pre-evaluation of designs in order to learn more about further characteristics such as, for instance, stability properties. The employed heat exchange units are exemplary: in industry, one is interested in the application of more complex designs such as 3-D airfoils. From a numerical point of view, the respective scientific fields may already provide a successful simulation of the flow field for airfoil configurations. However, in a realistic optimization environment they are optimized under many linear and nonlinear constraints, which can make it impossible to find a global optimum [41]. Another example is the design of a paper machine headbox [151] involving fluid-structure interaction, multiple objectives and nonlinear constraints.

A major aim of numerical simulation is to eventually optimize real-world problems in a computational environment. However, optimization is itself a self-contained scientific field. It is obvious that real-world optimization is usually represented by a very demanding optimization model for which only specific algorithms can be used. Our contribution here is to provide engineering science with an insight into a few of the problems that may arise with the optimization task. For the structural mechanics computation, we needed several thousand function evaluations until a Pareto front became visible. This is practically infeasible when optimizing flow fields, since each flow evaluation takes an enormous amount of computation time. We demonstrated the necessity of using evolutionary multi-objective algorithms when confronted with a multi-objective problem. In these, specific operators are used to ensure a sufficient spread along the Pareto front, allowing the practicing engineer to choose from a number of equally worthy solutions. This choosing between several conflicting objectives has up to now mostly been a matter of intuition. The proposed algorithms, on the other hand, have been shown to properly detect and describe a Pareto front. For the configurations which were investigated, we explicitly gave the diverse designs and discussed their distinctive features. Thereby, it became obvious how the designs reflect the different objectives. Our computations provided insight into how far the objectives influence the final designs. Moreover, we employed a comparison of different EMO algorithms and also of different parameter settings for the heat exchange units. We discovered a discontinuous and a non-convex Pareto front and discussed the appropriate optimization strategy for each problem setting.
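One example of such a spread operator is the crowding distance used by NSGA-II, which favours solutions in sparsely populated regions of the front. A sketch with illustrative objective values:

```python
def crowding_distance(front):
    """NSGA-II style crowding distance: for each objective, sort the
    front and add the normalized gap between each point's neighbours;
    boundary points get infinity so the extremes are always kept."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for j in range(1, n - 1):
            i = order[j]
            dist[i] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / (hi - lo)
    return dist

front = [(1.0, 5.0), (2.0, 3.0), (2.5, 2.5), (4.0, 1.0)]
d = crowding_distance(front)
```

When the archive overflows, solutions with small crowding distance are discarded first, which preserves both the extremes of the front and an even spread between them.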

Page 7: 4.3 ANFIS 135 - tuprints

141

A future research task is to provide engineering practitioners with EA operators, EMO algorithms and approximation models which are designed for the particular purpose at hand. Specific features of engineering applications are a long function evaluation time as well as very sparse information about the functional coherence between design parameters and objective function. Thus, efficient schemes and a maximized exploration of the search space have to be prioritized when setting up real-world optimization algorithms. Usually, research in the mentioned fields does not sufficiently take these properties into account. It is now at the frontier of current research to provide algorithms which are specifically designed for these particular optimization model properties. Furthermore, it is desirable to develop an integrated optimization environment in which real-world problems can be optimized and evaluated in an easy and automated manner. Clearly, this is a multi-disciplinary task and incorporates contributions from diverse scientific fields. Research in this direction is just at its beginning and, due to its practical importance, may gain increased attention during the next decades.


List of Figures

2.1 Mapping from decision space into objective space . . . . . . . . . . . . . . 10

2.2 Emerging Pareto-front. Both objectives (f1 and f2) are to be maximized. . 19

2.3 Pareto Front according to a numerical example (cf. section 2.3.1.5), [135] . 20

2.4 Flowchart Optimization Process . . . . . . . . . . . . . . . . . . . . . . . . 21

2.5 Evolutionary Algorithm with three generations moving towards the Pareto front . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.6 Flowchart Evolutionary Algorithm . . . . . . . . . . . . . . . . . . . . . . 30

2.7 One-Bit Mutation of a Bitstring . . . . . . . . . . . . . . . . . . . . . . . . 30

2.8 One-Point Crossover of Bitstrings . . . . . . . . . . . . . . . . . . . . . . . 31

2.9 Network Training. The upper three operations are independent from each other. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

2.10 After training the model is used in an evolutionary strategy. . . . . . . . . 37

2.11 Staggered Channel Junction with six deflection points . . . . . . . . . . . . 43

2.12 Effect of design parameters on shape variation (case with 6 deflection points). 44

2.13 Test example: Tensile bar with centrical hole . . . . . . . . . . . . . . . . . 50

2.14 Optimization runs with different initial geometries . . . . . . . . . . . . . . 51

2.15 2-D Scatter plot: Principle Idea of an Evolutionary Algorithm. Here: The population moves towards a physical limit, the Pareto front. . . . . . . . . 52

2.16 Comparison of 3 optimization runs with different parameter setup . . . . . 53

2.17 Scatter plot: Identical simulation parameters, varied initial geometries . . . 53

2.18 Optimization runs with different parameter setup (initial geometry pc1,1) . . 54

2.19 Derived geometries using different process configurations. We indicate the corresponding final designs. . . . . . . . . . . . . . . . . . . . . . . . . . . 54

2.20 Weighted objectives to picture the Pareto front . . . . . . . . . . . . . . . 55


2.21 Chain link optimization setup . . . . . . . . . . . . . . . . . . . . . . . . . 56

2.22 Results of the chain link optimization . . . . . . . . . . . . . . . . . . . . . 57

2.23 Example: Optimization of a thin-walled tube . . . . . . . . . . . . . . . . . 57

2.24 Optimization results: Thin-walled tube with torsion load . . . . . . . . . . 58

3.1 1) Approximation to and 2) diversity on Pareto front . . . . . . . . . . . . 61

3.2 These three individuals are excluded from archive with size 5. . . . . . . . 61

3.3 Distance assignment in NSGA-II . . . . . . . . . . . . . . . . . . . . . . . 66

3.4 Cutout of Flow Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

3.5 3-D scatter plot, scenario 1/1 . . . . . . . . . . . . . . . . . . . . . . . . . 73

3.6 Scatter plot surface area-temperature, scenario 1/1 . . . . . . . . . . . . . 73

3.7 Scatter plot surface area-pressure drop, scenario 1/1 . . . . . . . . . . . . . 73

3.8 Scatter plot pressure drop-temperature, scenario 1/1 . . . . . . . . . . . . 73

3.9 Temperature distribution for individual a3, scenario 1/1 . . . . . . . . . . . 74

3.10 Temperature distribution for individual a1, scenario 1/1 . . . . . . . . . . . 74

3.11 Temperature distribution for individual a2 = b1, scenario 1/1 . . . . . . . . 74

3.12 Temperature distribution for individual b2 = b3, scenario 1/1 . . . . . . . . 74

3.13 3-D scatter plot, scenario 1/2 . . . . . . . . . . . . . . . . . . . . . . . . . 75

3.14 Scatter plot surface area-temperature, scenario 1/2 . . . . . . . . . . . . . 75

3.15 Scatter plot surface area-pressure drop, scenario 1/2 . . . . . . . . . . . . . 75

3.16 Scatter plot pressure drop-temperature, scenario 1/2 . . . . . . . . . . . . 75

3.17 Temperature distribution for individual b3, scenario 1/2 . . . . . . . . . . . 76

3.18 Temperature distribution for individual a2 = a3, scenario 1/2 . . . . . . . . 76

3.19 Temperature distribution for individual b1, scenario 1/2 . . . . . . . . . . . 76

3.20 Temperature distribution for individual a1 = b2, scenario 1/2 . . . . . . . . 76

3.21 3-D scatter plot for scenario 1/3 . . . . . . . . . . . . . . . . . . . . . . . . 77

3.22 Scatter plot surface area-temperature for scenario 1/3 . . . . . . . . . . . . 77

3.23 Scatter plot surface area-pressure drop for scenario 1/3 . . . . . . . . . . . 77

3.24 Scatter plot pressure drop-temperature for scenario 1/3 . . . . . . . . . . . 77

3.25 Temperature distribution: individual a1, scenario 1/3 . . . . . . . . . . . . 78


3.26 Temperature distribution for individual a3, scenario 1/3 . . . . . . . . . . . 78

3.27 Temperature distribution for individual b2, scenario 1/3 . . . . . . . . . . . 78

3.28 Temperature distribution for individual b3, scenario 1/3 . . . . . . . . . . . 78

3.29 3-D scatter plot for scenario 2/1 . . . . . . . . . . . . . . . . . . . . . . . . 78

3.30 Scatter plot surface area-temperature for scenario 2/1 . . . . . . . . . . . . 78

3.31 Scatter plot surface area-pressure drop for scenario 2/1 . . . . . . . . . . . 79

3.32 Scatter plot pressure drop-temperature for scenario 2/1 . . . . . . . . . . . 79

3.33 Temperature distribution: individual a1, scenario 2/1 . . . . . . . . . . . . 79

3.34 Temperature distribution for individual a2, scenario 2/1 . . . . . . . . . . . 79

3.35 Temperature distribution for individual b1, scenario 2/1 . . . . . . . . . . . 79

3.36 Temperature distribution for individual b2 and b3, scenario 2/1 . . . . . . . 79

3.37 3-D scatter plot for scenario 2/2 . . . . . . . . . . . . . . . . . . . . . . . . 80

3.38 Scatter plot surface area-temperature for scenario 2/2 . . . . . . . . . . . . 80

3.39 Scatter plot surface area-pressure drop for scenario 2/2 . . . . . . . . . . . 80

3.40 Scatter plot pressure drop-temperature for scenario 2/2 . . . . . . . . . . . 80

3.41 3-D scatter plot for scenario 2/3 . . . . . . . . . . . . . . . . . . . . . . . . 81

3.42 Scatter plot surface area-temperature for scenario 2/3 . . . . . . . . . . . . 81

3.43 Scatter plot surface area-pressure drop for scenario 2/3 . . . . . . . . . . . 81

3.44 Scatter plot pressure drop-temperature for scenario 2/3 . . . . . . . . . . . 81

3.45 Temperature distribution for a2, b1 and b3, scenario 2/2 . . . . . . . . . . . 82

3.46 Temperature distribution for a1, a3 and b2, scenario 2/2 . . . . . . . . . . . 82

3.47 Temperature distribution for individual a1, scenario 2/3 . . . . . . . . . . . 82

3.48 Temperature distribution for individual a2, a3, scenario 2/3 . . . . . . . . . 82

3.49 Cutout of Flow Geometry. . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

3.50 18 design variables as indicated. . . . . . . . . . . . . . . . . . . . . . . . . 84

3.51 3-D scatter plot scenario 1, volume, pressure drop and temperature . . . . 86

3.52 3-D scatter plot scenario 1, pressure drop, outflow velocity and temperature 86

3.53 3-D scatter plot scenario 1, volume, pressure drop and temperature . . . . 86

3.54 3-D scatter plot scenario 1, pressure drop, outflow velocity and temperature 86


3.55 3-D scatter plot scenario 1, volume, pressure drop and temperature . . . . 87

3.56 3-D scatter plot scenario 1, volume, outflow velocity and temperature . . . 87

3.57 3-D scatter plot scenario 1, volume, pressure drop and temperature . . . . 87

3.58 3-D scatter plot scenario 1, volume, pressure drop and temperature . . . . 87

3.59 Scatter plot scenario 1, volume vs. pressure drop . . . . . . . . . . . . . . 88

3.60 Scatter plot scenario 1, volume vs. temperature . . . . . . . . . . . . . . . 88

3.61 Scatter plot scenario 1, volume vs. outflow velocity . . . . . . . . . . . . . 88

3.62 Scatter plot scenario 1, pressure drop vs. temperature . . . . . . . . . . . 88

3.63 Scatter plot scenario 1, pressure drop vs. outflow velocity . . . . . . . . . . 89

3.64 3-D scatter plot scenario 2: volume, pressure drop and temperature . . . . 89

3.65 3-D scatter plot scenario 2: volume, pressure drop and temperature . . . . 90

3.66 3-D scatter plot scenario 2: volume, pressure drop and temperature . . . . 90

3.67 3-D scatter plot scenario 2: volume, pressure drop and temperature . . . . 90

3.68 3-D scatter plot scenario 2: pressure drop, outflow velocity and temperature 90

3.69 Scatter plot scenario 2: volume vs. pressure drop . . . . . . . . . . . . . . 91

3.70 Scatter plot scenario 2: volume vs. temperature . . . . . . . . . . . . . . . 91

3.71 Scatter plot scenario 2: volume vs. outflow velocity . . . . . . . . . . . . . 91

3.72 Scatter plot scenario 2: pressure drop vs. temperature . . . . . . . . . . . 91

3.73 Scatter plot scenario 2: pressure drop vs. outflow velocity . . . . . . . . . . 92

3.74 Scatter plot scenario 2: volume vs. pressure drop . . . . . . . . . . . . . . 92

3.75 3-D scatter plot: SPEA2 on both scenarios . . . . . . . . . . . . . . . . . . 92

3.76 3-D scatter plot: NSGA-II on both scenarios . . . . . . . . . . . . . . . . . 92

3.77 3-D scatter plot: Femo on both scenarios . . . . . . . . . . . . . . . . . . . 93

3.78 3-D scatter plot: SPEA2 on both scenarios . . . . . . . . . . . . . . . . . . 93

3.79 Scatter plot: SPEA2 both scenarios, volume vs. pressure drop . . . . . . . 93

3.80 Scatter plot: SPEA2 both scenarios, volume vs. outflow velocity . . . . . . 93

3.81 Scatter plot: NSGA-II both scenarios, pressure drop vs. temperature . . . 94

3.82 Scatter plot: NSGA-II both scenarios, pressure drop vs. outflow velocity . 94

3.83 Temperature distribution for individual volmin. . . . . . . . . . . . . . . . . 95


3.84 Flow velocity distribution for individual volmin. . . . . . . . . . . . . . . . . 95

3.85 Temperature distribution for individual pdmin = volmax. . . . . . . . . . . . 96

3.86 Flow velocity distribution for individual pdmin = volmax. . . . . . . . . . . . 96

3.87 Temperature distribution for individual tempmin. . . . . . . . . . . . . . . . 97

3.88 Flow velocity distribution for individual tempmin. . . . . . . . . . . . . . . 97

3.89 Temperature distribution for individual tempmax = pdmax. . . . . . . . . . 98

3.90 Flow velocity distribution for individual tempmax = pdmax. . . . . . . . . . 98

4.1 Sigmoid activation function. . . . . . . . . . . . . . . . . . . . . . . . . . . 103

4.2 Multilayer Perceptron. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

4.3 An Artificial Neuron. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

4.4 Adjustment of α, β. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

4.5 Channel junction (for better illustration we give cut-outs of geometries). . 114

4.6 Optimized channel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

4.7 Network Performance for 16/16 network on 415 samples. . . . . . . . . . . 119

4.8 Network Performance for 32/24 network on 415 samples. . . . . . . . . . . 119

4.9 SA with 196 samples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

4.10 Best result by varying parameters. Pressure drop at 5.7698. . . . . . . . . 120

4.11 12-Neuron Network fed with 243 samples. . . . . . . . . . . . . . . . . . . 120

4.12 24-Neuron Network fed with 243 samples. . . . . . . . . . . . . . . . . . . 120

4.13 48-Neuron Network fed with 243 samples. . . . . . . . . . . . . . . . . . . 120

4.14 48-Neuron Network fed with 100 samples. . . . . . . . . . . . . . . . . . . 120

4.15 12-Neuron Network fed with 104 samples. . . . . . . . . . . . . . . . . . . 121

4.16 24-Neuron Network fed with 104 samples. . . . . . . . . . . . . . . . . . . 121

4.17 SA with 215 samples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

4.18 48-Neuron Network fed with 220 samples. . . . . . . . . . . . . . . . . . . 121

4.19 24-Neuron Network fed with 415 samples. . . . . . . . . . . . . . . . . . . 121

4.20 12-Neuron Network with 248 training samples. . . . . . . . . . . . . . . . . 121

4.21 Gaussian initial membership functions for an inference system. . . . . . . . 124

4.22 A fuzzy inference system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


4.23 A Sugeno type fuzzy inference system with two rules for each input. . . . . 128

4.24 Best result by varying parameters. Pressure drop at 5.7698. . . . . . . . . 134

4.25 SA with 196 samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

4.26 243 samples, large cluster radius. . . . . . . . . . . . . . . . . . . . . . . . 134

4.27 243 samples, small cluster radius. . . . . . . . . . . . . . . . . . . . . . . . 134

4.28 96 samples, small cluster radius. . . . . . . . . . . . . . . . . . . . . . . . . 134

4.29 60 training, 36 checking samples. . . . . . . . . . . . . . . . . . . . . . . . 134

4.30 SA with 215 samples on 12 design variables. . . . . . . . . . . . . . . . . . 135

4.31 170 training, 245 checking samples . . . . . . . . . . . . . . . . . . . . . . 135

4.32 250 training, 165 checking samples. Small cluster radius . . . . . . . . . . . 135

4.33 190 training, 30 checking samples. . . . . . . . . . . . . . . . . . . . . . . . 135

4.34 220 training, 28 checking samples. Small cluster radius . . . . . . . . . . . 135

4.35 170 training, 78 checking samples. . . . . . . . . . . . . . . . . . . . . . . . 135

4.36 Surface plot between input variable 1 and 6 and output variable (objective function). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136


List of Tables

2.1 Simulation parameters of three optimization runs
3.1 Scenario Simulation Settings
3.2 Parameters for simulation runs for scenarios 1 and 2
3.3 Scenario Simulation Settings
3.4 Parameters for simulation runs for scenarios 1 and 2
4.1 DFO Algorithm Performance
4.2 Methods used in this numerical example
4.3 One Layer Network, NF Data Set
4.4 Two Layer Network, NF Data Set
4.5 Data Sets used for 6 Design Variables
4.6 Data Sets used for Test Problem with 12 Design Variables
4.7 One Layer Network, NF algorithm
4.8 Two Layer Network, NF algorithm
4.9 ENF - One Layer Network
4.10 ENF - Two Layer Network
4.11 Methods used for numerical example
4.12 Data Sets used for 6 Design Variables
4.13 ANFIS performance for 243 training samples from Network Feeding
4.14 ANFIS performance for 96 training samples from Network Feeding
4.15 Data Sets used for Test Problem with 12 Design Variables
4.16 ANFIS performance for training samples from NF process
4.17 ANFIS performance for training samples from ENF process


Glossary

Optimization:

A(x)    Gradient of constraints
B∆(x)   A set of values in the neighborhood of x
c(x)    Constraints
Cp      Space of p-times differentiable functions
IA      Index set of active indices
IE      Index set of equality conditions
II      Index set of inequality conditions
λ       Lagrange parameter
mk(p)   Local approximation at iterate xk
P∗      Pareto front
PS∗     Pareto optimal set
σ       Standard deviation
<p      Pareto dominance operator

Evolutionary Strategies:

i       Individual
i′      Offspring individual
λ       Offspring population
µ       Parent population
Pg      Population in generation g
Pg,.    Pg subjected to an operation
E       Evaluation
M       Mutation
R       Recombination
S       Selection
V       Variation
z       Gaussian deviation vector

Multiobjective Optimization:

F       Pareto front
F(i)    Final fitness
Narc    Archive size
P∗      External archive
ri      Pareto rank
R(i)    Raw fitness
S(i)    Strength value
σki     Distance from individual i to individual k
<p      Distance operator

Neural Networks:

E       Network error
EW      Weight decay regularizer
H       Hessian matrix
J       Jacobian matrix
η       Learning parameter
µ       Membership function
νA      Indicator function
w       Vector of network weights



Curriculum Vitae

Personal Data
Name: Kai Hirschen
Date of birth: 30.11.1976
Marital status: single
Nationality: German

University Education

since 03/2003: TU Darmstadt, Department of Mechanical Engineering, Chair of Numerical Methods in Mechanical Engineering; scholarship holder in the Research Training Group (Graduiertenkolleg) 853 "Modellierung, Simulation und Optimierung von Ingenieuranwendungen"

10/1999 - 03/2003: Universität Hannover, studies in mathematics and economics; degree: Diplom-Mathematiker

09/2001 - 09/2002: Brunel University, West London, UK, programme "Computational Mathematics with Modelling"; degree: Master of Science

07/1999: Subject-restricted university entrance qualification (fachgebundene Hochschulreife); Vordiplom

09/1997 - 09/1999: Universität Siegen, studies in mathematics and economics

Professional Experience

08/1995 - 08/1997: AMEFA GmbH, Kriftel, wholesale of medical products; purchasing clerk

08/1993 - 07/1995: Buderus Guss GmbH, Staffel/Lahn; apprenticeship as industrial clerk (Industriekaufmann)

Military Service

09/1996 - 06/1997: Transportbataillon 370, Diez/Lahn

School Education

07/1995: Fachhochschulreife (entrance qualification for universities of applied sciences)

08/1995 - 07/1997: Fachoberschule, evening classes, Friedrich-Dessauer-Schule, Limburg/Lahn

08/1987 - 07/1993: Realschule, Leo-Sternberg-Schule, Limburg/Lahn

08/1983 - 07/1987: Grundschule Limburg-Offheim