Management Science Overview


Transcript of Management Science Overview

Page 1: Management Science Overview

AN ASSIGNMENT REPORT OF MANAGERIAL SCIENCE

1

Page 2: Management Science Overview

Q1 Management science - an overview

Management science is an interdisciplinary mathematical science that focuses on the effective use of technology by organizations. In contrast, many other science and engineering disciplines focus on technology, giving secondary consideration to its use.

Employing techniques from other mathematical sciences – such as mathematical modeling, statistical analysis, and mathematical optimization – operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Because of its emphasis on human-technology interaction and because of its focus on practical applications, operations research has overlap with other disciplines, notably industrial engineering and operations management, and draws on psychology and organization science. Operations Research is often concerned with determining the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost) of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.

Operational research (OR) encompasses a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency.[2] Some of the tools used by operational researchers are statistics, optimization, probability theory, queuing theory, game theory, graph theory, decision analysis, mathematical modeling and simulation. Because of the computational nature of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power.

2

Page 3: Management Science Overview

Work in operational research and management science may be characterized as one of three categories:[3]

Fundamental or foundational work takes place in three mathematical disciplines: probability theory, mathematical optimization, and dynamical systems theory.

Modeling work is concerned with the construction of models, analyzing them mathematically, implementing them on computers, solving them using software tools, and assessing their effectiveness with data. This level is mainly instrumental, and driven mainly by statistics and econometrics.

Application work in operational research, like other engineering and economics disciplines, attempts to use models to make a practical impact on real-world problems.

The major subdisciplines in modern operational research, as identified by the journal Operations Research,[4] are:

Computing and information technologies
Decision analysis
Environment, energy, and natural resources
Financial engineering
Manufacturing, service sciences, and supply chain management
Marketing Engineering[5]
Policy modeling and public sector work
Revenue management
Simulation
Stochastic models
Transportation

3

Page 4: Management Science Overview

Q2 Models in management science - meaning, need, advantages, and pitfalls.

Management science is applied research designed to solve practical problems of the modern world, rather than to acquire knowledge for knowledge's sake. One might say that the goal of the applied scientist is to improve the human condition.

In 1967 Stafford Beer characterized the field of management science as "the business use of operations research".[22] However, in modern times the term management science may also be used to refer to the separate fields of organizational studies or corporate strategy. Like operational research itself, management science (MS) is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links with economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to complex decision problems. In short, management science helps businesses to achieve their goals using the scientific methods of operational research.

The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. Of course, the techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups.

4

Page 5: Management Science Overview

Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as designing and developing new and better models of organizational excellence.[23]

The application of these models within the corporate sector became known as Management science.[2]

Techniques

Some of the fields that have considerable overlap with Management Science include:

Data mining
Decision analysis
Engineering
Forecasting
Game theory
Industrial engineering
Logistics
Mathematical modeling
Mathematical optimization
Probability and statistics
Project management
Simulation
Social network / transportation forecasting models
Supply chain management
Financial engineering

Advantages:-

1. Better control

5

Page 6: Management Science Overview

2. Better system

3. Better decision

4. Better co-ordination

Pitfalls:-

· Dependence on an Electronic Computer: O.R. techniques try to find out an optimal solution taking into account all the factors. In modern society, these factors are enormous, and expressing them in quantity and establishing relationships among them require voluminous calculations that can only be handled by computers.

· Non-Quantifiable Factors: O.R. techniques provide a solution only when all the elements related to a problem can be quantified. All relevant variables do not lend themselves to quantification. Factors that cannot be quantified find no place in O.R. models.

· Distance between Manager and Operations Researcher: O.R., being a specialist's job, requires a mathematician or a statistician, who might not be aware of the business problems. Similarly, a manager may fail to understand the complex working of O.R. Thus, there is a gap between the two.

· Money and Time Costs: When the basic data are subjected to frequent changes, incorporating them into the O.R. models is a costly affair. Moreover, a fairly good solution at present may be more desirable than a perfect O.R. solution available after some time.

· Implementation: Implementation of decisions is a delicate task. It must take into account the complexities of human relations and behaviour.

6

Page 7: Management Science Overview

Ques-3 integer programming

Integer programming is concerned with optimization problems in which some of the variables are required to take on discrete values. Rather than allow a variable to assume all real values in a given range, only predetermined discrete values within the range are permitted. In most cases, these values are the integers, giving rise to the name of this class of models.

Models with integer variables are very useful. Situations that cannot be modeled by linear programming are easily handled by integer programming. Primary among these are binary decisions such as yes-no, build-no build or invest-not invest. Although one can model a binary decision in linear programming with a variable that ranges between 0 and 1, there is nothing that keeps the solution from taking a fractional value such as 0.5, which is hardly acceptable to a decision maker. Integer programming requires such a variable to be either 0 or 1, but not in-between.

Integer programming example

In the planning of the monthly production for the next six months a company must, in each month, operate either a normal shift or an extended shift (if it produces at all). A normal shift costs £100,000 per month and can produce up to 5,000 units per month. An extended shift costs £180,000 per month and can produce up to 7,500 units per month. Note here that, for either type of shift, the cost incurred is fixed by a union guarantee agreement and so is independent of the amount produced.

It is estimated that changing from a normal shift in one month to an extended shift in the next month costs an extra £15,000. No extra cost is incurred in changing from an extended shift in one month to a normal shift in the next month.

7

Page 8: Management Science Overview

The cost of holding stock is estimated to be £2 per unit per month (based on the stock held at the end of each month) and the initial stock is 3,000 units (produced by a normal shift). The amount in stock at the end of month 6 should be at least 2,000 units. The demand for the company's product in each of the next six months is estimated to be as shown below:

Month    1      2      3      4      5      6
Demand   6,000  6,500  7,500  7,000  6,000  6,000

Production constraints are such that if the company produces anything in a particular month it must produce at least 2,000 units. If the company wants a production plan for the next six months that avoids stockouts, formulate their problem as an integer program.

Hint: first formulate the problem allowing non-linear constraints and then attempt to make all the constraints linear.

Solution

Variables

The decisions that have to be made relate to:

whether to operate a normal shift or an extended shift in each month; and how much to produce each month.

Hence let:

xt = 1 if we operate a normal shift in month t (t=1,2,...,6); = 0 otherwise
yt = 1 if we operate an extended shift in month t (t=1,2,...,6); = 0 otherwise
Pt (>= 0) is the amount produced in month t (t=1,2,...,6)

In fact, for this problem, we can ease the formulation by defining three additional variables - namely let:

zt = 1 if we switch from a normal shift in month t-1 to an extended shift in month t (t=1,2,...,6); = 0 otherwise
It is the closing inventory (amount of stock left) at the end of month t (t=1,2,...,6)

wt = 1 if we produce in month t, and hence from the production constraints Pt >= 2000 (t=1,2,...,6)

8

Page 9: Management Science Overview

= 0 otherwise (i.e. Pt = 0)

The motivation behind introducing the first two of these variables (zt, It) is that in the objective function we will need terms relating to shift change cost and inventory holding cost. The motivation behind introducing the third of these variables (wt) is the production constraint "either Pt = 0 or Pt >= 2000", which needs a zero-one variable so that it can be dealt with using the standard trick for "either/or" constraints.

In any event formulating an IP tends to be an iterative process and if we have made a mistake in defining variables we will encounter difficulties in formulating the constraints/objective. At that point we can redefine our variables and reformulate.

Constraints

We first express each constraint in words and then in terms of the variables defined above.

only operate (at most) one shift each month

xt + yt <= 1 t=1,2,...,6

Note here that we could not have made do with just one variable (xt say) and defined that variable to be one for a normal shift and zero for an extended shift (since in that case what if we decide not to produce in a particular month?).

Although we could have introduced a variable indicating no shift (normal or extended) operated in a particular month this is not necessary as such a variable is equivalent to 1-xt-yt.

production limits not exceeded

Pt <= 5000xt + 7500yt t=1,2,...,6

Note here the use of addition in the right-hand side of the above equation where we are making use of the fact that at most one of xt and yt can be one and the other must be zero.

no stockouts

It >= 0 t=1,2,...,6

we have an inventory continuity equation of the form

closing stock = opening stock + production - demand

9

Page 10: Management Science Overview

where I0 = 3000. Hence letting Dt = demand in month t (t=1,2,...,6) (a known constant) and assuming

that opening stock in period t = closing stock in period t-1 and that production in period t is available to meet demand in period t

we have that

It = It-1 + Pt - Dt t=1,2,...,6

As noted above this equation assumes that we can meet demand in the current month from goods produced that month. Any time lag between goods being produced and becoming available to meet demand is easily incorporated into the above equation. For example for a 2 month time lag we replace Pt in the above equation by Pt-2 and interpret It as the number of goods in stock at the end of month t which are available to meet demand i.e. goods are not regarded as being in stock until they are available to meet demand. Inventory continuity equations of the type shown are common in production planning problems.

the amount in stock at the end of month 6 should be at least 2000 units

I6 >= 2000

production constraints of the form "either Pt = 0 or Pt >= 2,000".

Here we make use of the standard trick we presented for "either/or" constraints. We have already defined appropriate zero-one variables wt (t=1,2,...,6) and so we merely need the constraints

Pt <= Mwt      t=1,2,...,6
Pt >= 2000wt   t=1,2,...,6

Here M is a positive constant and represents the most we can produce in any period t (t=1,2,...,6). A convenient value for M for this example is M = 7500 (the most we can produce irrespective of the shift operated).

we also need to relate the shift change variable zt to the shifts being operated

The obvious constraint is

zt = xt-1yt t=1,2,...,6

since as both xt-1 and yt are zero-one variables, zt can only take the value one if both xt-1 and yt are one (i.e. we operate a normal shift in period t-1 and an extended shift in period t). Looking back to the verbal description of zt it is clear that the mathematical

10

Page 11: Management Science Overview

description given above is equivalent to that verbal description. (Note here that we define x0 = 1 (y0 = 0)).

This constraint is non-linear. However we are told that we can first formulate the problem with non-linear constraints and so we proceed. We shall see later how to linearise (generate equivalent linear constraints for) this equation.

Objective

We wish to minimise total cost and this is given by

SUM{t=1,...,6}(100000xt + 180000yt + 15000zt + 2It)

Hence our formulation is complete.

Note that, in practice, we would probably regard It and Pt as taking fractional values and round them to get integer values (since they are both large this should be acceptable). Note too that this is a non-linear integer program.

Comments

In practice a model of this kind would be used on a "rolling horizon" basis whereby every month or so it would be updated and resolved to give a new production plan.

The inventory continuity equation presented is quite flexible, being able to accommodate both time lags (as discussed previously) and wastage. For example if 2% of the stock is wasted each month due to deterioration/pilfering etc then the inventory continuity equation becomes It = 0.98It-1 + Pt - Dt. Note that, if necessary, the objective function can include a term related to 0.02It-1 to account for the loss in financial terms.

In order to linearise (generate equivalent linear constraints for) our non-linear constraint we again use a standard trick. Note that the equation is of the form

A = BC

where A, B and C are zero-one variables. The standard trick is that a non-linear constraint of this type can be replaced by the two linear constraints

A <= (B+C)/2 and A >= B+C-1

11

Page 12: Management Science Overview

To see this we use the fact that as B and C take only zero-one values there are only four possible cases to consider:

B   C   A = BC becomes   A <= (B+C)/2 becomes   A >= B+C-1 becomes
0   0   A = 0            A <= 0                 A >= -1
0   1   A = 0            A <= 0.5               A >= 0
1   0   A = 0            A <= 0.5               A >= 0
1   1   A = 1            A <= 1                 A >= 1

Then, recalling that A can also only take zero-one values, it is clear that in each of the four possible cases the two linear constraints (A <= (B+C)/2 and A >= B+C-1) are equivalent to the single non-linear constraint (A=BC).
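As an illustration only (not part of the original formulation), the four cases in the table can also be checked mechanically. The short Python snippet below enumerates every 0-1 combination and confirms that the two linear constraints admit exactly the same points as A = BC:

# Brute-force check of the linearisation A = B*C  <=>  A <= (B+C)/2 and A >= B+C-1
# for zero-one variables, enumerating all 0/1 combinations.
from itertools import product

for B, C, A in product([0, 1], repeat=3):
    nonlinear_ok = (A == B * C)
    linear_ok = (A <= (B + C) / 2) and (A >= B + C - 1)
    assert nonlinear_ok == linear_ok, (A, B, C)

print("The two linear constraints admit exactly the same 0-1 points as A = B*C.")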

Returning now to our original non-linear constraint

zt = xt-1yt

this involves the three zero-one variables zt, xt-1 and yt and so we can use our general rule and replace this non-linear constraint by the two linear constraints

zt <= (xt-1 + yt)/2   t=1,2,...,6
zt >= xt-1 + yt - 1   t=1,2,...,6

Making this change transforms the non-linear integer program given before into an equivalent linear integer program.
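For illustration only, the linearised model could be handed to an off-the-shelf mixed-integer solver. The sketch below assumes the open-source PuLP library is available; the variable names follow the formulation above (x, y, z, w, P, I), with M = 7500 and I0 = 3000, and the division in the linearised constraint is cleared by multiplying through by 2:

# A sketch of the linearised six-month production-planning IP, assuming PuLP is installed.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value, LpStatus

months = range(1, 7)
D = {1: 6000, 2: 6500, 3: 7500, 4: 7000, 5: 6000, 6: 6000}   # demand
M = 7500                                                      # most we can produce in any month

x = LpVariable.dicts("normal", months, cat="Binary")          # normal shift operated
y = LpVariable.dicts("extended", months, cat="Binary")        # extended shift operated
z = LpVariable.dicts("switch", months, cat="Binary")          # normal (t-1) -> extended (t)
w = LpVariable.dicts("produce", months, cat="Binary")         # produce anything in month t
P = LpVariable.dicts("production", months, lowBound=0)
I = LpVariable.dicts("inventory", months, lowBound=0)         # It >= 0 means no stockouts

prob = LpProblem("production_planning", LpMinimize)
prob += lpSum(100000 * x[t] + 180000 * y[t] + 15000 * z[t] + 2 * I[t] for t in months)

for t in months:
    prob += x[t] + y[t] <= 1                                  # at most one shift per month
    prob += P[t] <= 5000 * x[t] + 7500 * y[t]                 # production limits
    prev_I = 3000 if t == 1 else I[t - 1]                     # I0 = 3000
    prob += I[t] == prev_I + P[t] - D[t]                      # inventory continuity
    prob += P[t] <= M * w[t]                                  # either Pt = 0 ...
    prob += P[t] >= 2000 * w[t]                               # ... or Pt >= 2000
    prev_x = 1 if t == 1 else x[t - 1]                        # x0 = 1 (normal shift initially)
    prob += 2 * z[t] <= prev_x + y[t]                         # linearised zt = x(t-1)*yt
    prob += z[t] >= prev_x + y[t] - 1

prob += I[6] >= 2000                                          # closing stock requirement

prob.solve()
print(LpStatus[prob.status], "total cost =", value(prob.objective))
for t in months:
    shift = "normal" if x[t].varValue else ("extended" if y[t].varValue else "idle")
    print(t, shift, "P =", P[t].varValue, "I =", I[t].varValue)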

A food is manufactured by refining raw oils and blending them together. The raw oils come in two categories:

Vegetable oils: VEG1, VEG2

Non-vegetable oils: OIL1, OIL2, OIL3

The prices for buying each oil are given below (in £/tonne):

VEG1   VEG2   OIL1   OIL2   OIL3
 115    128    132    109    114

The final product sells at £180 per tonne. Vegetable oils and non-vegetable oils require different production lines for refining. It is not possible to refine more than 210 tonnes

12

Page 13: Management Science Overview

of vegetable oils and more than 260 tonnes of non-vegetable oils. There is no loss of weight in the refining process and the cost of refining may be ignored.

There is a technical restriction relating to the hardness of the final product. In the units in which hardness is measured this must lie between 3.5 and 6.2. It is assumed that hardness blends linearly and that the hardness of the raw oils is:

VEG1   VEG2   OIL1   OIL2   OIL3
 8.8    6.2    1.9    4.3    5.1

It is required to determine what to buy and how to blend the raw oils so that the company maximises its profit.

Formulate the above problem as a linear program. (Do not actually solve it). What assumptions do you make in solving this problem by linear programming?

The following extra conditions are imposed on the food manufacture problem stated above as a result of the production process involved:

the food may never be made up of more than 3 raw oils

if an oil (vegetable or non-vegetable) is used, at least 30 tonnes of that oil must be used

if either of VEG1 or VEG2 are used then OIL2 must also be used

Introducing 0-1 integer variables extend the linear programming model you have developed to encompass these new extra conditions.

Solution

Variables

We need to decide how much of each oil to use so let xi be the number of tonnes of oil of type i used (i=1,...,5) where i=1 corresponds to VEG1, i=2 corresponds to VEG2, i=3 corresponds to OIL1, i=4 corresponds to OIL2 and i=5 corresponds to OIL3 and where xi >=0 i=1,...,5

Constraints

cannot refine more than a certain amount of oil

x1 + x2 <= 210
x3 + x4 + x5 <= 260

hardness of the final product must lie between 3.5 and 6.2

13

Page 14: Management Science Overview

(8.8x1 + 6.2x2 + 1.9x3 + 4.3x4 + 5.1x5)/(x1 + x2 + x3 + x4 + x5) >= 3.5
(8.8x1 + 6.2x2 + 1.9x3 + 4.3x4 + 5.1x5)/(x1 + x2 + x3 + x4 + x5) <= 6.2

Objective

The objective is to maximise total profit, i.e.

maximise 180(x1 + x2 + x3 + x4 + x5) - 115x1 - 128x2 - 132x3 - 109x4 - 114x5

The assumptions we make in solving this problem by linear programming are:

all data/numbers are accurate
hardness does indeed blend linearly
no loss of weight in refining
can sell all we produce

Integer program

Variables

In order to deal with the extra conditions we need to decide whether to use an oil or not so let yi = 1 if we use any of oil i (i=1,...,5), 0 otherwise

Constraints

must relate the amount used (x variables) to the integer variables (y) that specify whether any is used or not

x1 <= 210y1
x2 <= 210y2
x3 <= 260y3
x4 <= 260y4
x5 <= 260y5

the food may never be made up of more than 3 raw oils

y1 + y2 + y3 + y4 + y5 <= 3

if an oil (vegetable or non-vegetable) is used, at least 30 tonnes of that oil must be used

xi >= 30yi           i=1,...,5

if either of VEG1 or VEG2 are used then OIL2 must also be used

14

Page 15: Management Science Overview

y4 >= y1

y4 >= y2

Objective

The objective is unchanged by the addition of these extra constraints and variables.
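Again purely as an illustration, the blending model with the extra 0-1 conditions could be sketched with PuLP (assumed to be installed). The hardness ratio constraints are multiplied out so that they remain linear, which is equivalent here since total tonnage is non-negative:

# A sketch of the food-blending model with the extra 0-1 conditions, assuming PuLP is installed.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value, LpStatus

oils = ["VEG1", "VEG2", "OIL1", "OIL2", "OIL3"]
cost = {"VEG1": 115, "VEG2": 128, "OIL1": 132, "OIL2": 109, "OIL3": 114}
hard = {"VEG1": 8.8, "VEG2": 6.2, "OIL1": 1.9, "OIL2": 4.3, "OIL3": 5.1}
cap = {"VEG1": 210, "VEG2": 210, "OIL1": 260, "OIL2": 260, "OIL3": 260}   # per-line refining limits

x = LpVariable.dicts("tonnes", oils, lowBound=0)       # tonnes of each oil used
y = LpVariable.dicts("use", oils, cat="Binary")        # 1 if the oil is used at all

prob = LpProblem("food_blending", LpMaximize)
total = lpSum(x[i] for i in oils)
prob += 180 * total - lpSum(cost[i] * x[i] for i in oils)          # profit objective

prob += x["VEG1"] + x["VEG2"] <= 210                               # vegetable refining limit
prob += x["OIL1"] + x["OIL2"] + x["OIL3"] <= 260                   # non-vegetable refining limit

# Hardness between 3.5 and 6.2 (ratio constraints multiplied out to keep them linear)
prob += lpSum(hard[i] * x[i] for i in oils) >= 3.5 * total
prob += lpSum(hard[i] * x[i] for i in oils) <= 6.2 * total

for i in oils:
    prob += x[i] <= cap[i] * y[i]                                  # link amount used to y
    prob += x[i] >= 30 * y[i]                                      # if used, at least 30 tonnes

prob += lpSum(y[i] for i in oils) <= 3                             # at most 3 raw oils
prob += y["OIL2"] >= y["VEG1"]                                     # VEG1 used => OIL2 used
prob += y["OIL2"] >= y["VEG2"]                                     # VEG2 used => OIL2 used

prob.solve()
print(LpStatus[prob.status], "profit =", value(prob.objective))
print({i: x[i].varValue for i in oils})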

Ques-4 goal programming

Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA), also known as multiple-criteria decision making (MCDM). It is an optimization technique that can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures. Each of these measures is given a goal or target value to be achieved. Unwanted deviations from this set of target values are then minimised in an achievement function. This can be a vector or a weighted sum dependent on the goal programming variant used. As satisfaction of the target is deemed to satisfy the decision maker(s), an underlying satisficing philosophy is assumed.

Variants

The initial goal programming formulations ordered the unwanted deviations into a number of priority levels, with the minimisation of a deviation in a higher priority level being infinitely more important than any deviations in lower priority levels. This is known as lexicographic or pre-emptive goal programming. Ignizio[4] gives an algorithm showing how a lexicographic goal programme can be solved as a series of linear programmes. Lexicographic goal programming should be used when there exists a clear priority ordering amongst the goals to be achieved.

If the decision maker is more interested in direct comparisons of the objectives then Weighted or non pre-emptive goal programming should be used. In this case all the unwanted deviations are multiplied by weights, reflecting their relative importance, and added together as a single sum to form the achievement function. It is important to recognise that deviations measured in different units cannot be summed directly due to the phenomenon of incommensurability.

Hence each unwanted deviation is multiplied by a normalisation constant to allow direct comparison. Popular choices for normalisation constants are the goal target value of

15

Page 16: Management Science Overview

the corresponding objective (hence turning all deviations into percentages) or the range of the corresponding objective (between the best and the worst possible values, hence mapping all deviations onto a zero-one range).[6] For decision makers more interested in obtaining a balance between the competing objectives, Chebyshev goal programming should be used. Introduced by Flavell in 1976,[10] this variant seeks to minimise the maximum unwanted deviation, rather than the sum of deviations. This utilises the Chebyshev distance metric, which emphasizes justice and balance rather than ruthless optimisation.
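To make the weighted (non pre-emptive) variant concrete, the sketch below shows how deviation variables and a weighted, target-normalised achievement function look in code. It assumes PuLP, and the two goals, targets and weights are hypothetical numbers chosen only to illustrate the structure:

# A minimal weighted goal-programming sketch, assuming PuLP; the goals, targets and
# normalisation constants below are hypothetical, purely to illustrate deviation variables.
from pulp import LpProblem, LpMinimize, LpVariable, value

x1 = LpVariable("x1", lowBound=0)
x2 = LpVariable("x2", lowBound=0)

# Deviation variables for each goal (n = under-achievement, p = over-achievement)
n1, p1 = LpVariable("n1", lowBound=0), LpVariable("p1", lowBound=0)
n2, p2 = LpVariable("n2", lowBound=0), LpVariable("p2", lowBound=0)

prob = LpProblem("weighted_goal_programme", LpMinimize)

# Goal 1: profit 4x1 + 3x2 should reach about 120 (under-achievement n1 is unwanted)
prob += 4 * x1 + 3 * x2 + n1 - p1 == 120
# Goal 2: labour 2x1 + x2 should not exceed about 50 (over-achievement p2 is unwanted)
prob += 2 * x1 + x2 + n2 - p2 == 50
# Hard constraint (always satisfied, not a goal)
prob += x1 + x2 <= 40

# Achievement function: weighted sum of the unwanted deviations, normalised by the targets
prob += (1.0 / 120) * n1 + (1.0 / 50) * p2

prob.solve()
print("x1 =", value(x1), "x2 =", value(x2), "n1 =", value(n1), "p2 =", value(p2))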

Strengths and weaknesses

A major strength of goal programming is its simplicity and ease of use. This accounts for the large number of goal programming applications in many and diverse fields. Linear Goal programmes can be solved using linear programming software as either a single linear programme, or in the case of the lexicographic variant, a series of connected linear programmes.

Goal programming can hence handle relatively large numbers of variables, constraints and objectives. A debated weakness is the ability of goal programming to produce solutions that are not Pareto efficient. This violates a fundamental concept of decision theory, that is no rational decision maker will knowingly choose a solution that is not Pareto efficient. However, techniques are available [6][11][12] to detect when this occurs and project the solution onto the Pareto efficient solution in an appropriate manner.

The setting of appropriate weights in the goal programming model is another area that has caused debate, with some authors [13] suggesting the use of the Analytic Hierarchy Process or interactive methods [14] for this purpose.

16

Page 17: Management Science Overview

Ques-5 decision theory - decision making under uncertainty and under risk

Theoretical models of decision making under uncertainty are applied to criminal choice in an effort to determine the variables that can be manipulated by criminal justice agencies to reduce criminal activity. For example, three policy alternatives come to mind for the expenditure of public funds: (1) increase funding for police services, thereby increasing the probability of apprehension and conviction; (2) establish longer sentences for crimes (which must necessarily be accompanied by increased funding for prisons), thereby increasing the cost or penalty associated with a criminal act; and (3) increase funding for rehabilitative programs, thereby reducing the proclivity of an individual to commit crimes or, for vocational programs, increasing the opportunity costs

17

Page 18: Management Science Overview

(legal income foregone) to the criminal of engaging in illegal activities.

Source of Errors in Decision Making: The main sources of errors in risky decision-making problems are: false assumptions, not having an accurate estimation of the probabilities, relying on expectations, difficulties in measuring the utility function, and forecast errors.

18

Page 19: Management Science Overview

Consider the following Investment Decision-Making Example:

The Investment Decision-Making Example:

                         States of Nature
Actions      Growth (G)   Medium G (MG)   No Change (NC)   Low (L)
Bonds            12             8               7              3
Stocks           15             9               5             -2
Deposit           7             7               7              7

The States of Nature are the states of economy during one year. The problem is to decide what action to take among three possible courses of action with the given rates of return as shown in the body of the table.

Coping With Uncertainties

There are a few satisfactory descriptions of uncertainty, one of which is the concept and the algebra of probability.

To make serious business decisions one has to face a future in which ignorance and uncertainty increasingly overpower knowledge as one's planning horizon recedes into the distance. The deficiencies in our knowledge of the future may be divided into three domains, each with rather murky boundaries:

Risk: One might be able to enumerate the outcomes and figure the probabilities. However, one must look out for non-normal distributions, especially those with "fat tails", as illustrated in the stock market by rare events.

Uncertainty: One might be able to enumerate the outcomes but the probabilities are murky. Most of the time, the best one can do is to give a rank order to possible outcomes and then be careful that one has not omitted one of significance.

19

Page 20: Management Science Overview

Black Swans: The name comes from an Australian genetic anomaly. This is the domain of events which are either “extremely unlikely” or “inconceivable” but when they happen, and they do happen, they have serious consequences, usually bad. An example of the first kind is the Exxon Valdez oil spill, of the second, the radiation accident at Three Mile Island.

In fact, all highly complex man-made systems, such as large communications networks, nuclear-powered electric-generating stations and spacecraft, are full of hidden "paths to failure", so numerous that we cannot think of all of them, or cannot afford the time and money required to test for and eliminate them. Individually each of these paths is a black swan, but there are so many of them that the probability of one of them being activated is quite significant.

While making business decisions, we are largely concerned with the domain of risk and usually assume that the probabilities follow normal distributions. However, we must be concerned with all three domains and have an open mind about the shape of the distributions.

Continuum of pure uncertainty and certainty: The domain of decision analysis models falls between two extreme cases. This depends upon the degree of knowledge we have about the outcome of our actions, as shown below:

Ignorance                      Risky Situation                Complete Knowledge
_______________________________________________________________
Pure Uncertainty Model         Probabilistic Model            Deterministic Model

One "pole" on this scale is deterministic, such as the carpenter's problem. The opposite "pole" is pure uncertainty. Between these two extremes are problems under risk. The main idea here is that for any given problem, the degree of certainty varies among managers depending upon how much knowledge each one has about the same problem. This reflects the recommendation of a different solution by each person.

Probability is an instrument used to measure the likelihood of occurrence for an event. When you use probability to express your uncertainty, the deterministic side has a probability of 1 (or zero), while the other end has a flat (all equally probable) probability. For example, if you are certain of the occurrence (or non-occurrence) of an event, you use the probability of one (or zero). If you are uncertain, and would use the expression "I really don't know," the event may or may not occur with a probability of 50%. This is the Bayesian notion that probability assessment is always subjective. That is, the probability

20

Page 21: Management Science Overview

always depends upon how much the decision maker knows. If someone knows all there is to know, then the probability will diverge either to 1 or 0.

The decision situations with flat uncertainty have the largest risk. For simplicity, consider a case where there are only two outcomes, with one having probability p. Then the variation in the states of nature is p(1-p). The largest variation occurs if we set p = 50%, giving each outcome an equal chance. In such a case, the quality of information is at its lowest level. Remember from your Statistics course that the quality of information and variation are inversely related: larger variation in data implies lower quality data (i.e. information).

Relevant information and knowledge used to solve a decision problem sharpens our flat probability. Useful information moves the location of a problem from the pure uncertain "pole" towards the deterministic "pole".

Probability assessment is nothing more than the quantification of uncertainty. In other words, quantification of uncertainty allows for the communication of uncertainty between persons. There can be uncertainties regarding events, states of the world, beliefs, and so on. Probability is the tool for both communicating uncertainty and managing it (taming chance).

There are different types of decision models that help to analyze the different scenarios. Depending on the amount and degree of knowledge we have, the three most widely used types are:

Decision-making under pure uncertainty
Decision-making under risk
Decision-making by buying information (pushing the problem towards the deterministic "pole")

In decision-making under pure uncertainty, the decision maker has absolutely no knowledge, not even about the likelihood of occurrence for any state of nature. In such situations, the decision-maker's behavior is purely based on his/her attitude toward the unknown. Some of these behaviors are optimistic, pessimistic, and least regret, among others. The most optimistic person I ever met was undoubtedly a young artist in Paris who, without a franc in his pocket, went into a swanky restaurant and ate dozens of oysters in hopes of finding a pearl to pay the bill.

Optimist: The glass is half-full.
Pessimist: The glass is half-empty.
Manager: The glass is twice as large as it needs to be.

21

Page 22: Management Science Overview

Or, as in the following metaphor of a captain in a rough sea:

The pessimist complains about the wind; the optimist expects it to change; the realist adjusts the sails.

Optimists are right; so are the pessimists. It is up to you to choose which you will be. The optimist sees opportunity in every problem; the pessimist sees problem in every opportunity.

Both optimists and pessimists contribute to our society. The optimist invents the airplane and the pessimist the parachute.

Whenever the decision maker has some knowledge regarding the states of nature, he/she may be able to assign subjective probability for the occurrence of each state of nature. By doing so, the problem is then classified as decision making under risk.

In many cases, the decision-maker may need an expert's judgment to sharpen his/her uncertainties with respect to the likelihood of each state of nature. In such a case, the decision-maker may buy the expert's relevant knowledge in order to make a better decision. The procedure used to incorporate the expert's advice with the decision maker's probabilities assessment is known as the Bayesian approach.

For example, in an investment decision-making situation, one is faced with the following question: What will the state of the economy be next year? Suppose we limit the possibilities to Growth (G), Same (S), or Decline (D). Then, a typical representation of our uncertainty could be depicted as follows:

22

Page 23: Management Science Overview

Decision Making Under Pure Uncertainty

In decision making under pure uncertainty, the decision-maker has no knowledge regarding any of the states of nature outcomes, and/or it is costly to obtain the needed information. In such cases, the decision making depends merely on the decision-maker's personality type.

Personality Types and Decision Making:

Pessimism, or Conservative (MaxMin). Worst-case scenario: bad things always happen to me.

a) Write the min # in each action row,
b) Choose the max # of these and take that action.

B   3
S  -2
D   7 *

Optimism, or Aggressive (MaxMax). Good things always happen to me.

a) Write the max # in each action row,
b) Choose the max # and take that action.

B  12
S  15 *
D   7

Coefficient of Optimism (Hurwicz's Index), Middle of the road: I am neither too optimistic nor too pessimistic.

a) Choose an α between 0 and 1, where 1 means optimistic and 0 means pessimistic,

b) Choose the largest and smallest # for each action,

c) Multiply the largest payoff (row-wise) by α and the smallest by (1 - α),

d) Pick the action with the largest sum.

For example, for α = 0.7, we have

B (.7*12) + (.3*3) = 9.3

S (.7*15) + .3*(-2) = 9.9

D (.7*7) + (.3*7) = 7

23

Page 24: Management Science Overview

Minimize Regret: (Savage's Opportunity Loss) I hate regrets and therefore I have to minimize my regrets. My decision should be made so that it is worth repeating. I should only do those things that I feel I could happily repeat. This reduces the chance that the outcome will make me feel regretful, or disappointed, or that it will be an unpleasant surprise.

Regret is the payoff on what would have been the best decision in the circumstances minus the payoff for the actual decision in the circumstances. Therefore, the first step is to setup the regret table:

a) Take the largest number in each state of nature column (say, L).
b) Subtract all the numbers in that state of nature column from it (i.e. L - Xi,j).
c) Choose the maximum number for each action.
d) Choose the minimum number from step (c) and take that action.

The Regret Matrix

           G         MG       NC       L          Maximum regret
Bonds    (15-12)   (9-8)    (7-7)    (7-3)             4 *
Stocks   (15-15)   (9-9)    (7-5)    (7-(-2))          9
Deposit  (15-7)    (9-7)    (7-7)    (7-7)             8

You may try checking your computations using Decision Making Under Pure Uncertainty JavaScript, and then performing some numerical experimentation for a deeper understanding of the concepts.
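For a quick check of the four criteria on the investment payoff table above, the following Python sketch computes the MaxMin, MaxMax, Hurwicz (α = 0.7) and minimax-regret choices; it is an illustrative aid, not part of the original text:

# Decision criteria under pure uncertainty for the investment payoff table above.
payoff = {                     # rows: actions; columns: Growth, Medium G, No Change, Low
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}
alpha = 0.7                    # Hurwicz coefficient of optimism

maximin = max(payoff, key=lambda a: min(payoff[a]))
maximax = max(payoff, key=lambda a: max(payoff[a]))
hurwicz = max(payoff, key=lambda a: alpha * max(payoff[a]) + (1 - alpha) * min(payoff[a]))

# Regret: column-wise best payoff minus actual payoff; pick the smallest maximum regret.
col_best = [max(col) for col in zip(*payoff.values())]
max_regret = {a: max(b - v for b, v in zip(col_best, payoff[a])) for a in payoff}
min_regret = min(max_regret, key=max_regret.get)

print("MaxMin:", maximin)
print("MaxMax:", maximax)
print("Hurwicz (alpha = 0.7):", hurwicz)
print("Minimax regret:", min_regret, max_regret)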

Limitations of Decision Making under Pure Uncertainty

1. Decision analysis in general assumes that the decision-maker faces a decision problem where he or she must choose at least and at most one option from a set of options. In some cases this limitation can be overcome by formulating the decision making under uncertainty as a zero-sum two-person game.

2. In decision making under pure uncertainty, the decision-maker has no knowledge regarding which state of nature is "most likely" to happen. He or she is probabilistically ignorant concerning the state of nature; therefore he or she cannot be

24

Page 25: Management Science Overview

optimistic or pessimistic. In such a case, the decision-maker invokes consideration of security.

3. Notice that any technique used in decision making under pure uncertainty is appropriate only for private-life decisions. Moreover, the public person (i.e., you, the manager) has to have some knowledge of the state of nature in order to predict the probabilities of the various states of nature. Otherwise, the decision-maker is not capable of making a reasonable and defensible decision.

You might try to use Decision Making Under Uncertainty JavaScript E-lab for checking your computation, performing numerical experimentation for a deeper understanding, and stability analysis of your decision by altering the problem's parameters.

Decision Making Under Risk

Risk implies a degree of uncertainty and an inability to fully control the outcomes or consequences of such an action. Risk or the elimination of risk is an effort that managers employ. However, in some instances the elimination of one risk may increase some other risks. Effective handling of a risk requires its assessment and its subsequent impact on the decision process. The decision process allows the decision-maker to evaluate alternative strategies prior to making any decision. The process is as follows:

1. The problem is defined and all feasible alternatives are considered. The possible outcomes for each alternative are evaluated.

2. Outcomes are discussed based on their monetary payoffs or net gain in reference to assets or time.

3. Various uncertainties are quantified in terms of probabilities.

4. The quality of the optimal strategy depends upon the quality of the judgments. The decision-maker should identify and examine the sensitivity of the optimal strategy with respect to the crucial factors.

Whenever the decision maker has some knowledge regarding the states of nature, he/she may be able to assign subjective probability estimates for the occurrence of each state. In such cases, the problem is classified as decision making under risk. The decision-maker is able to assign probabilities based on the occurrence of the states of nature. The decision making under risk process is as follows:

a) Use the information you have to assign your beliefs (called subjective probabilities) regarding each state of the nature, p(s),

b) Each action has a payoff associated with each of the states of nature X(a,s),

25

Page 26: Management Science Overview

c) We compute the expected payoff, also called the return (R), for each action: R(a) = sum over all states s of [X(a,s) p(s)],

d) We accept the principle that we should minimize (or maximize) the expected payoff,

e) Execute the action which minimizes (or maximizes) R(a).
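As a small illustration of steps (a)-(e), the sketch below applies the expected-payoff rule to the investment table; the subjective probabilities used are hypothetical, since the text does not fix them for this table:

# Expected-payoff rule for decision making under risk. The subjective probabilities p(s)
# below are hypothetical; the payoffs are those of the investment table above.
payoff = {
    "Bonds":   [12, 8, 7, 3],
    "Stocks":  [15, 9, 5, -2],
    "Deposit": [7, 7, 7, 7],
}
p = [0.4, 0.3, 0.2, 0.1]       # p(s) for Growth, Medium G, No Change, Low (must sum to 1)

R = {a: sum(x * q for x, q in zip(payoff[a], p)) for a in payoff}
best = max(R, key=R.get)       # maximise the expected payoff R(a)
print(R, "->", best)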

26

Page 27: Management Science Overview

Ques-6 decision tree analysis

Decision Tree Approach: A decision tree is a chronological representation of the decision process. It utilizes a network of two types of nodes: decision (choice) nodes (represented by square shapes), and states of nature (chance) nodes (represented by circles). Construct a decision tree utilizing the logic of the problem. For the chance nodes, ensure that the probabilities along any outgoing branch sum to one. Calculate the expected payoffs by rolling the tree backward (i.e., starting at the right and working toward the left).

You may imagine driving your car; starting at the foot of the decision tree and moving to the right along the branches. At each square you have control, to make a decision and then turn the wheel of your car. At each circle, Lady Fortuna takes over the wheel and you are powerless.

Here is a step-by-step description of how to build a decision tree:

1. Draw the decision tree using squares to represent decisions and circles to represent uncertainty,

2. Evaluate the decision tree to make sure all possible outcomes are included,

3. Calculate the tree values working from the right side back to the left,

4. Calculate the values of uncertain outcome nodes by multiplying the value of the outcomes by their probability (i.e., expected values).

On the tree, the value of a node can be calculated when we have the values for all the nodes following it. The value for a choice node is the largest value of all nodes immediately following it. The value of a chance node is the expected value of the nodes following that node, using the probability of the arcs. By rolling the tree backward, from its branches toward its root, you can compute the value of all nodes including the root of the tree. Putting these numerical results on the decision tree results in the following graph:

27

Page 28: Management Science Overview

A Typical Decision Tree

Determine the best decision for the tree by starting at its root and going forward.

Based on the preceding decision tree, our decision is as follows:

Hire the consultant, and then wait for the consultant's report. If the report predicts either high or medium sales, then go ahead and manufacture the product. Otherwise, do not manufacture the product.

Check the consultant's efficiency rate by computing the following ratio:

(Expected payoff using the consultant, in dollars) / EVPI.

Using the decision tree, the expected payoff if we hire the consultant is:

EP = 1000 - 500 = 500,

EVPI = .2(3000) + .5(2000) + .3(0) = 1600.

Therefore, the efficiency of this consultant is: 500/1600 = 31%
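The consultant-efficiency arithmetic quoted above can be reproduced in a few lines; the priors (0.2, 0.5, 0.3), the best state payoffs (3000, 2000, 0), the expected payoff of 1000 and the fee of 500 are the figures cited in this passage:

# Consultant-efficiency arithmetic from the decision-tree discussion above.
priors = [0.2, 0.5, 0.3]            # P(high), P(medium), P(low) sales
best_payoff = [3000, 2000, 0]       # best attainable payoff in each state of nature

EVPI = sum(p * v for p, v in zip(priors, best_payoff))      # 1600
EP_with_consultant = 1000 - 500                             # expected payoff minus the fee

efficiency = EP_with_consultant / EVPI
print(f"EVPI = {EVPI}, efficiency = {efficiency:.0%}")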

If the manager wishes to rely solely on the marketing research firm's recommendations, then we assign flat prior probability [as opposed to (0.2, 0.5, 0.3) used in our numerical example].

28

Page 29: Management Science Overview

Clearly the manufacturer is concerned with measuring the risk of the above decision, based on decision tree.

Coefficient of Variation as Risk Measuring Tool and Decision Procedure: Based on the above decision, and its decision-tree, one might develop a coefficient of variation (C.V) risk-tree, as depicted below:

Coefficient of Variation as a Risk Measuring Tool and Decision Procedure

Notice that the above risk-tree is extracted from the decision tree, with C.V. numerical value at the nodes relevant to the recommended decision. For example the consultant fee is already subtracted from the payoffs.

From the above risk-tree, we notice that this consulting firm is likely (with probability 0.53) to recommend Bp (a medium sales), and if you decide to manufacture the product then the resulting coefficient of variation is very high (403%), compared with the other branch of the tree (i.e., 251%).

Clearly one must not consider only one consulting firm; rather, one must consider several potential consulting firms during the decision-making planning stage. The risk decision tree is then a necessary tool to construct for each consulting firm, in order to measure and compare them before arriving at the final decision for implementation.

29

Page 30: Management Science Overview

Ques-7 queuing theory

Queuing Model or Queuing Theory is the mathematical study of waiting lines (or queues) that enables mathematical analysis of several related processes, including arriving at the (back of the) queue, waiting in the queue, and being served by the service channels at the front of the queue.

What do you mean by Traffic Intensity? The ratio λ/µ is called the traffic intensity or the utilization factor, and it determines the degree to which the capacity of the service station is utilized, where λ is the mean rate of arrival in the queue and µ is the mean service rate.

Balking: If a customer decides not to enter the queue because it is too long, this is called balking.

Reneging: If a customer enters the queue but after some time loses patience and leaves it, this is called reneging.

Jockeying: When there are two or more parallel queues and customers move from one queue to another, this is called jockeying.

What is Waiting Time Cost and Idle Time Cost? The cost of waiting customers includes either the indirect cost of lost business or the direct cost of idle equipment and persons. The cost of idle service facilities is the payment to be made to the servers for the period for which they remain idle.

What is the Transient and Steady State of the system? Queuing analysis involves the system's behaviour over time. If the operating characteristics vary with time, the system is said to be in a transient state. If the behaviour becomes independent of its initial conditions (number of customers in the system) and of the elapsed time, the system is said to be in a steady-state condition.

Applications of Queuing Model

The Queuing Model can be applied to various situations:

Where customers are involved, such as restaurants, cafés, supermarkets, airports, etc.
Very useful in manufacturing units
Applicable to the problem of machine breakdown and repairs
Applicable to the scheduling of jobs in production control
Applicable to the minimization of traffic congestion at tollbooths
Provides solutions to inventory control problems

Characteristics of Queuing

The basic characteristics of the queuing phenomenon are the three major constituents of a queuing system: (1) customers arriving, (2) the waiting line (queue), and (3) the service facility (service centre with its service channels), after which the customer leaves the system.

Elements of Queuing System

30

Page 31: Management Science Overview

The elements of queuing models are: arrival distribution, service distribution, service channel, service discipline, maximum number of customers allowed, calling source, and customer's behaviour.

Operating Characteristics of Queuing Models

The operating characteristics of queuing models are: queue length, system length, waiting time in the queue, total time in the system, and the utilization factor.

Assumptions in Queuing Model

The customers arrive for service at a single service facility at random according to a Poisson distribution with mean arrival rate λ.
The service time has an exponential distribution with mean service rate µ.
The service discipline followed is First Come First Served.
The calling source has infinite size.
The waiting space available for customers in the queue is infinite.
Service facility behaviour is normal.
Customer behaviour is normal.
The mean arrival rate is less than the mean service rate.

Limitations of Queuing Model

The waiting space for the customers is usually limited.
The population of customers may not be infinite and the queuing discipline may not be First Come First Served.
The arrival process may not be stationary.
The arrival rate may be state dependent.
Services may not be rendered continuously.
The queuing system may not have reached the steady state; it may, instead, be in a transient state.
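Under the assumptions listed above (single server, Poisson arrivals, exponential service, λ < µ), the system is the classic M/M/1 queue. The sketch below computes its standard steady-state operating characteristics; the formulas are textbook M/M/1 results rather than something derived in this report, and the example rates are hypothetical:

# Standard steady-state measures of the M/M/1 queue implied by the assumptions above.
def mm1_measures(lam: float, mu: float) -> dict:
    """lam = mean arrival rate, mu = mean service rate; requires lam < mu."""
    if lam >= mu:
        raise ValueError("steady state requires the arrival rate to be less than the service rate")
    rho = lam / mu                       # traffic intensity / utilisation factor
    L = rho / (1 - rho)                  # expected number in the system
    Lq = rho ** 2 / (1 - rho)            # expected queue length
    W = 1 / (mu - lam)                   # expected time in the system
    Wq = rho / (mu - lam)                # expected waiting time in the queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Hypothetical example: 8 customers per hour arriving, 10 per hour served.
print(mm1_measures(lam=8, mu=10))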

31

Page 32: Management Science Overview

Ques-8 replacement theory

The Replacement Theory in Operations Research is used in the decision-making process of replacing used equipment with a substitute, mostly new equipment of better usage. The replacement might be necessary due to deteriorating properties or the failure or breakdown of particular equipment. 'Replacement Theory' is used in cases where existing items have out-lived their usefulness, where it may not be economical anymore to continue with them, or where the items have been destroyed either by accident or otherwise. The situations discussed above can be solved mathematically and categorised on some basis, like:

1. Items that deteriorate with time, e.g. machine tools, vehicles, equipment, buildings, etc.,

2. Items becoming out-of-date due to new developments, like ordinary weaving looms replaced by automatic looms, manual accounting replaced by Tally, etc.,

3. Items which do not deteriorate but fail completely after a certain amount of use, like electronic parts, street lights, etc. (Group Replacement), and

4. The existing working staff in an organization gradually diminishing due to death, retirement, retrenchment and otherwise (Staff Replacement).

Replacement Policy for Equipment which Deteriorates Gradually

Let us see the first case, of gradual failure of items with time. Consider the example of a motor vehicle: the pattern of failure here is progressive in nature, i.e. as the life of the vehicle increases, its efficiency decreases. This results in additional expenditure in running or maintaining the vehicle, and at the same time its resale value (also called scrap value) keeps on decreasing. This makes the situation a typical case for applying 'Replacement Theory'.

Example:

A transport company purchased a motor vehicle for Rs. 80000/-. The resale value of the vehicle keeps on decreasing, from Rs. 70000/- in the first year to Rs. 5000/- in the eighth year, while the running cost of maintaining the vehicle keeps on increasing, from Rs. 3000/- in the first year to Rs. 20000/- in the eighth year, as shown in the table below. Determine the optimum replacement policy.

32

Page 33: Management Science Overview

The MS-Excel files of this algorithm can be downloaded from the links provided further in this post. The cost of the equipment is to be entered in cell B1 (shown as the green cell with 80000). Now enter the scrap values and the running costs in the green columns C5 to C12 and D5 to D12. The algorithm will then automatically calculate the solution, which is shown in the figure below.

The answer can be fetched from the last column. See the pattern: the average total cost (ATC) at first keeps dipping, from Rs. 13000/- until it reaches Rs. 11850/- in cell H8. From H9 it starts increasing again. The year in which the ATC is lowest (after which it starts increasing again) gives the optimum replacement period and cost of the vehicle.

Solution: The vehicle needs to be replaced after four years of its purchase, wherein the cost of maintaining the vehicle would be lowest, at an average of Rs. 11850/- per year.

Clarification on the Methodology

33

Page 34: Management Science Overview

There are two considerations here. First, the running cost (Rn) is increasing every year, while at the same time the vehicle is depreciating in value. This depreciation is (C - S), i.e. in the first year the scrap value of the vehicle is Rs. 70000/- while it was purchased for Rs. 80000/-, so the vehicle has depreciated by Rs. 10000/- in year one, and so on (see column F).

Thus the total cost of keeping this vehicle is this depreciation plus its maintenance. The maintenance cost is made cumulative by adding the previous years' running costs to it every successive year. Let's make this simple!

The depreciation is Rs. 10000/- by the first year, Rs. 19000/- by the second, Rs. 25000/- by the third, and so on. See here: the vehicle has depreciated by Rs. 25000/- "by" the third year and not "in" the third year. Note that the non-cumulative depreciation "in" the third year would be Rs. 6000/- [Rs. 25000/- minus Rs. 19000/-; see cells F6 and F7].

As the depreciation is itself a cumulative figure here, we make the running cost cumulative as well. That means the cost of maintaining the vehicle "by" a particular year. So, the cost of maintaining the vehicle "by" the third year is Rs. 11400/- (D5 + D6 + D7, or 3000 + 3600 + 4800).

Hence the total cost incurred by the third year would be Rs. 25000 + Rs. 11400 = Rs. 36400 (see cell G7). Finally, the "average cost" of keeping this vehicle for three years would be 36400 divided by 3 years, i.e. Rs. 12133.33, as can be seen from cell H7, and so on.
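The average-total-cost calculation described above is easy to mechanise. In the sketch below, the purchase price, the first three years of scrap values and running costs, and the eighth-year values follow the figures quoted in the text; years four to seven are hypothetical placeholders, since the full table lives in the source's spreadsheet figure, which is not reproduced here:

# Average-total-cost (ATC) calculation for gradual deterioration, as described above.
def replacement_table(cost, scrap_values, running_costs):
    """cost: purchase price C; scrap_values[t-1] and running_costs[t-1] refer to year t.
    Returns (year, cumulative running cost, depreciation C-S, total cost, ATC) rows."""
    rows, cum_running = [], 0
    for year, (s, r) in enumerate(zip(scrap_values, running_costs), start=1):
        cum_running += r                  # running cost made cumulative
        depreciation = cost - s           # (C - S) by this year
        total = depreciation + cum_running
        rows.append((year, cum_running, depreciation, total, total / year))
    return rows

# Years 4-7 below are hypothetical placeholders; the other values follow the quoted figures.
scrap = [70000, 61000, 55000, 50000, 44000, 35000, 20000, 5000]
running = [3000, 3600, 4800, 5500, 7500, 10000, 15000, 20000]

for year, cum_r, dep, total, atc in replacement_table(80000, scrap, running):
    print("year %d: total %8.0f  ATC %9.2f" % (year, total, atc))
# Replace the vehicle in the year where the ATC stops falling and starts to rise again.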

34

Page 35: Management Science Overview

Ques-9 Markov chain

Here we consider a continuous time stochastic process in which the duration of all state changing activities are exponentially distributed. Time is a continuous parameter. The process satisfies the Markovian property and is called a Continuous Time Markov Chain (CTMC). The process is entirely described by a matrix showing the rate of transition from each state to every other state. The rates are the parameters of the associated exponential distributions. The analytical results are very similar to those of a DTMC. The ATM example is continued with illustrations of the elements of the model and the statistical measures that can be obtained from it.

 Finite State First Order Markov Chain Model development:

As mentioned in the introduction, to completely specify the model, all we need to know are the initial state (or the probability distribution of the initial state) of the system, p(0) = [p1, p2, ..., pn], and the transition probability matrix P.

Here, Pij represents the constant probability (finite state, first order Markov chain) of transition from state Xi at time t to state Xj at time t+1, for any value of t. The Markovian property makes P time invariant. Knowing P, we can also construct a transition diagram to represent the system.

Given the initial distribution p(0),

p(1) = p(0) . P

p(2) = p(1) . P = p(0) . P . P = p(0) . P^2

thus, for any k,

p(k) = p(0) . P^k

35

Page 36: Management Science Overview


We also note that the elements of P must satisfy the following conditions: the sum of Pij over all j must equal 1 for each i, and Pij >= 0 for all i and j.

AN EXAMPLE: A delicate precision instrument has a component that is subject to random failure. In fact, if the instrument is working properly at a given moment in time, then with probability 0.15 it will fail within the next 10-minute period. If the component fails, it can be replaced by a new one, an operation that also takes 10 minutes. The present supplier of replacement components does not guarantee that all replacement components are in proper working condition. The present quality standards are such that about 4% of the components supplied are defective. However, this can be discovered only after the defective component has been installed. If defective, the instrument has to go through a new replacement operation. Assume that when failure occurs, it always occurs at the end of a 10-minute period.

a) Find the transition probability matrix associated with this process.

b) Given that it was working properly initially, what is the probability of finding the instrument not in proper working condition after 20 minutes? After 40 minutes?
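One reasonable way to set up part (a), offered here only as a hedged sketch rather than the definitive answer, is a two-state chain with states "working" and "not working (being replaced)", a failure probability of 0.15 per 10-minute period, and a 4% chance that a replacement is defective. Part (b) then follows from p(k) = p(0) . P^k:

# A sketch of parts (a) and (b), assuming two states: 0 = working properly, 1 = not working
# (failed / undergoing replacement), with 10-minute time steps.
import numpy as np

P = np.array([
    [0.85, 0.15],   # working: stays working with 0.85, fails within the period with 0.15
    [0.96, 0.04],   # not working: replacement succeeds with 0.96, is defective with 0.04
])

p0 = np.array([1.0, 0.0])                       # working properly initially

for steps, minutes in [(2, 20), (4, 40)]:
    pk = p0 @ np.linalg.matrix_power(P, steps)  # p(k) = p(0) P^k
    print(f"P(not working after {minutes} minutes) = {pk[1]:.4f}")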

6.3 Classification of Finite Markov Chains:

36

Page 37: Management Science Overview

Two states i and j of a system defined by the transition matrix are said to communicate if j is accessible from i and i is accessible from j. The number of transitions is not important. Communication is a class property: if i communicates with both j and k, then j communicates with k. Consider the following transition matrices:

In matrix P1, states 1 and 2 can be reached from states 3 and 4, but once in 1 or 2 the process cannot return to states 3 or 4; that is, the process has been absorbed in the set of states 1 and 2. The set of states 1 and 2 is called an ergodic set (closed set) and the individual states 1 and 2 are called ergodic states. No matter where the process starts out, it will soon end up in the ergodic set. The states 3 and 4 are called transient states and the set {3, 4} is called the transient set. If we are only interested in the long-term behaviour of the system, then we can forget about the transient states, as the higher powers of the transition matrix will lead to zero probability in the transient states. It is possible for a process to have more than one ergodic set in a transition matrix.

In matrix P2, the process will be absorbed in either state 1 or 2. This is evident from the one step transition probabilities p11 and p22. A state that has this property is called an absorbing state. The absorbing states can be identified by the 1's in the leading diagonal of the transition matrix.

Consider the following transition matrices: In P3, all states communicate and form a single ergodic set. In P4, too, all the states communicate with each other and form an ergodic set, but the process always moves from state 1 or 2 to state 3 or 4 and vice versa. Such a chain is called cyclic. The long-term behaviour of such a process will remain cyclic.

A chain that is not cyclic is called aperiodic. A Markov chain that is aperiodic and irreducible is called regular.

37

Page 38: Management Science Overview

Ques-10 short notes on—

a. Simulation-need, advantages, disadvantages

Simulation in general is to pretend that one deals with a real thing while really working with an imitation. In operations research the imitation is a computer model of the simulated reality. A flight simulator on a PC is also a computer model of some aspects of the flight: it shows on the screen the controls and what the "pilot" (the youngster who operates it) is supposed to see from the "cockpit" (his armchair).

Why use models? To fly a simulator is safer and cheaper than flying the real airplane. For precisely this reason, models are used in industry, commerce and the military: it is very costly, dangerous and often impossible to make experiments with real systems. Provided that models are adequate descriptions of reality (they are valid), experimenting with them can save money, suffering and even time.

When to use simulation? Systems that change with time, such as a gas station where cars come and go (dynamic systems), and that involve randomness (nobody can predict exactly when the next car will arrive at the station) are good candidates for simulation. Modeling a complex dynamic system analytically requires so many simplifications that the resulting model may not be valid. Simulation does not require as many simplifying assumptions, which often makes it the only practical tool even in the absence of randomness.

How to simulate? Suppose we are interested in a gas station. We may describe the behaviour of this system graphically by plotting the number of cars in the station (the state of the system) over time. Every time a car arrives the graph increases by one unit, while a departing car causes the graph to drop by one unit. This graph (called a sample path) could be obtained from observation of a real station, but it could also be constructed artificially. Such artificial construction, and the analysis of the resulting sample path (or of several sample paths in more complex cases), constitutes the simulation.

Types of simulation: Discrete event. The above sample path consists only of horizontal and vertical lines, since car arrivals and departures occur at distinct points in time, which we refer to as events. Between two consecutive events nothing happens - the graph is horizontal. When the number of events is finite, we call the simulation "discrete event."

In some systems the state changes all the time, not just at the time of some discrete events. For example, the water level in a reservoir with given in and outflows may change

38

Page 39: Management Science Overview

all the time. In such cases "continuous simulation" is more appropriate, although discrete event simulation can serve as an approximation.

We now consider discrete event simulation in more detail.

How is simulation performed? Simulations may be performed manually. Most often, however, the system model is written either as a computer program or as some form of input to simulator software.

System terminology:

State: A variable characterizing an attribute in the system such as level of stock in inventory or number of jobs waiting for processing.

Event: An occurrence at a point in time which may change the state of the system, such as arrival of a customer or start of work on a job.

Entity: An object that passes through the system, such as cars in an intersection or orders in a factory. Often an event (e.g., arrival) is associated with an entity (e.g., customer).

Queue: A queue is not only a physical queue of people; it can also be a task list, a buffer of finished goods waiting for transportation, or any place where entities wait for something to happen, for any reason.

Creating: Creating is causing an arrival of a new entity to the system at some point in time.

Scheduling: Scheduling is the act of assigning a new future event to an existing entity.

Random variable: A random variable is a quantity that is uncertain, such as interarrival time between two incoming flights or number of defective parts in a shipment.

Random variate: A random variate is an artificially generated random variable.

Distribution: A distribution is the mathematical law which governs the probabilistic features of a random variable.

39

Page 40: Management Science Overview

A Simple Example: Build a simulation of a gas station with a single pump served by a single attendant. Assume that car arrivals as well as their service times are random. First, identify the:

states: number of cars waiting for service and number of cars served at any moment

events: arrival of cars, start of service, end of service

entities: these are the cars

queue: the queue of cars in front of the pump, waiting for service

random realizations: interarrival times, service times

distributions: we shall assume exponential distributions for both the interarrival time and service time.

Next, specify what to do at each event. For the above example this looks as follows:

At the event of entity arrival: create the next arrival; if the server is free, send the entity for start of service, otherwise it joins the queue.

At the event of service start: the server becomes occupied; schedule the end of service for this entity.

At the event of service end: the server becomes free; if any entities are waiting in the queue, remove the first entity from the queue and send it for start of service.

Some initialisation is still required, for example the creation of the first arrival. Lastly, the above is translated into code. This is easy with an appropriate library which has subroutines for entity creation, event scheduling, proper timing of events, queue manipulation, random variate generation and statistics collection.
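As a rough sketch of how the event logic above translates into code (this is not taken from the original report), the following event-driven simulation models the single-pump gas station with exponentially distributed interarrival and service times; the mean times are illustrative assumptions.

    import heapq
    import random

    # Event-driven simulation of a single-pump gas station (illustrative sketch).
    MEAN_INTERARRIVAL = 5.0   # mean minutes between car arrivals (assumed)
    MEAN_SERVICE = 4.0        # mean service time in minutes (assumed)
    SIM_END = 10_000.0        # length of the simulated period in minutes

    def simulate(seed=42):
        random.seed(seed)
        server_busy = False
        queue = []            # arrival times of cars waiting for the pump
        events = []           # (time, kind) min-heap of scheduled events
        served, total_wait = 0, 0.0

        # Initialisation: create the first arrival.
        heapq.heappush(events, (random.expovariate(1 / MEAN_INTERARRIVAL), "arrival"))

        while events:
            clock, kind = heapq.heappop(events)
            if clock > SIM_END:
                break
            if kind == "arrival":
                # Create the next arrival.
                heapq.heappush(events, (clock + random.expovariate(1 / MEAN_INTERARRIVAL), "arrival"))
                if not server_busy:
                    # Start of service: server becomes occupied, schedule end of service.
                    server_busy = True
                    heapq.heappush(events, (clock + random.expovariate(1 / MEAN_SERVICE), "departure"))
                else:
                    queue.append(clock)   # otherwise the car joins the queue
            else:  # end of service
                served += 1
                if queue:
                    # Remove the first car from the queue and send it for start of service.
                    arrived = queue.pop(0)
                    total_wait += clock - arrived
                    heapq.heappush(events, (clock + random.expovariate(1 / MEAN_SERVICE), "departure"))
                else:
                    server_busy = False

        print(f"cars served: {served}, average wait in queue: {total_wait / served:.2f} minutes")

    simulate()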

An ithink model for this system might look like the following: 

If you run this model you find that it settles into essentially a steady state, and it is about as exciting as watching paint dry! 

40

Page 41: Management Science Overview

Now, in the 10th month the company notices its revenue has dropped from $1.5m/month to $1.35m/month and it wonders what has happened. And where do you think it looks for the problem? All around the 10th month of course. And what does it find? The company finds that it still has 120 employees, yet there are now 30 professionals and 90 rookies. A most puzzling situation! 

As it turns out, an organizational policy change made in month 3 annoyed the professionals more than past changes had, and the quit rate jumped from 10 to 15 professionals a month. The system, with its built-in hiring rule (essentially an autopilot, no-thought action), hired one rookie for each professional who quit. What this one-time jump in the quit rate actually did was set off a 6-month transition within the organization, leading to a new equilibrium state with 30 professionals and 90 rookies. The following graph represents this transition. 
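The underlying ithink model is not reproduced in this transcript, but a minimal stock-and-flow sketch consistent with the figures quoted above can be built by assuming a 6-month rookie-to-professional training pipeline, a hiring rule of one rookie per quitting professional, and monthly revenue of $15k per professional and $10k per rookie; all of these parameters are back-calculated assumptions, not values stated in the original.

    from collections import deque

    # Stock-and-flow sketch of the professional/rookie staffing example.
    # All parameters are assumptions chosen to be consistent with the text's figures.
    TRAINING_MONTHS = 6                      # assumed rookie-to-professional training time
    REV_PRO, REV_ROOKIE = 15_000, 10_000     # assumed monthly revenue per head

    professionals = 60                       # assumed initial split: 60 pros + 60 rookies
    cohorts = deque([10] * TRAINING_MONTHS)  # rookies grouped by month of hire
    quit_rate = 10                           # professionals quitting per month

    for month in range(1, 13):
        if month == 3:
            quit_rate = 15                   # policy change: quit rate jumps from 10 to 15
        graduates = cohorts.popleft()        # the oldest rookie cohort becomes professional
        cohorts.append(quit_rate)            # hiring rule: one rookie per quitting professional
        professionals += graduates - quit_rate
        rookies = sum(cohorts)
        revenue = professionals * REV_PRO + rookies * REV_ROOKIE
        print(f"month {month:2d}: professionals={professionals:3d} rookies={rookies:3d} "
              f"revenue=${revenue / 1e6:.2f}m")

Run with these assumed parameters, the sketch reproduces the pattern described above: a one-time jump in the quit rate in month 3 sets off a roughly 6-month slide to a new equilibrium of 30 professionals, 90 rookies and $1.35m of monthly revenue.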

41

Page 42: Management Science Overview

Thus, one of the real benefits of modeling and simulation is the ability to compress time and space across the interrelationships within a system. This brings into view the results of interactions that would normally escape us because they are not closely related in time and space. Modeling and simulation can provide a way of understanding dynamic complexity!

Common Types of Simulation Application

Application Types:-

• Design and Operation of Queuing Systems

• Managing Inventory Systems

• Estimating the Probability of Completing a Project by the Deadline

• Design and Operation of Manufacturing & Distribution Systems

• Financial Risk Analysis

• Health Care Applications

• Applications to Other Service Industries

• Government service, banking, hotels, restaurants, educational

institutions, disaster planning, the military, amusement parks

Advantages of Simulation

• Interaction of random events: e.g. the random occurrence of machine breakdowns

• Non-standard distributions: Only simulation gives you the flexibility to

describe events and timings as they occur in real life.

42

Page 43: Management Science Overview

• Communication tool (visualisation, animation): it lets you clearly describe your proposal to others

• It is able to show the behaviour of a system (how the system develops over time) rather than just the end result.

• Makes you think: Simulation provides a vehicle for a discussion about all aspects of a process

DISADVANTAGES

• It does not produce an optimal solution, and there are problems related to the techniques themselves.

• It is a tool that supports finding a solution, not the solution itself.

• The utility of the study depends upon the quality of the model and the skill of the modeller.

• Gathering highly reliable input data can be time consuming and therefore expensive.

• Simulation models do not yield an optimal solution; rather, they serve as a tool for analysing the behaviour of a system under conditions specified by the experimenter.

b. Dynamic programming

Dynamic programming (DP) is a simple yet powerful approach for solving certain

types of sequential optimization problems. Most real-life decision problems are

sequential (dynamic) in nature since a decision made now usually affects future

outcomes and payoffs. An important aspect of optimal sequential decisions is the

43

Page 44: Management Science Overview

desire to balance present costs with the future costs. A decision made now that

minimizes the current cost only without taking into account the future costs may

not necessarily be the optimal decision for the complete multiperiod problem.

For example, in an inventory control problem it may be optimal to order more

than the current period’s demand and incur high inventory carrying costs now in

order to lower the costs of potential shortages that may arise in the future. Thus,

in sequential problems it may be optimal to have some “short-term pain” for the

prospect of “long-term gain.”

Dynamic programming is based on relatively few concepts. The state variables

of a dynamic process completely specify the process and provide information

on all that needs to be known in order to make a decision. For example, in an

inventory control problem the state variable xt may be the inventory level of the

product at the start of period t. Additionally, if the supplier of the product is not4 1. Dynamic Programming

always available, then the supplier’s availability status may also be another state

variable describing the inventory system.

Example:-Optimal consumption and saving

A mathematical optimization problem that is often used in teaching dynamic programming

to economists (because it can be solved by hand[8]) concerns a consumer who lives over

the periods t = 0,1,2,...,T and must decide how much to consume and how much to save

in each period.

Let ct be consumption in period t, and assume consumption yields utility u(ct) = ln(ct) as

long as the consumer lives. Assume the consumer is impatient, so that

he discounts future utility by a factor b each period, where 0 < b < 1. Let kt be capital in

period t. Assume initial capital is a given amount k0 > 0, and suppose that this period's

44

Page 45: Management Science Overview

capital and consumption determine next period's capital as kt+1 = A·kt^a − ct, where A is a positive constant and 0 < a < 1. Assume capital cannot be negative. Then the consumer's decision problem can be written as follows:

maximize the sum over t = 0,1,2,...,T of b^t·ln(ct), subject to kt+1 = A·kt^a − ct ≥ 0 for all t = 0,1,2,...,T.

Written this way, the problem looks complicated, because it involves solving for all the

choice variables c0, c1, c2, ..., cT and k1, k2, k3, ..., kT+1 simultaneously. (Note that k0 is not a

choice variable—the consumer's initial capital is taken as given.)

The dynamic programming approach to solving this problem involves breaking it apart

into a sequence of smaller decisions. To do so, we define a sequence of value

functions Vt(k), for t = 0,1,2,...,T,T + 1 which represent the value of having any

amount of capital k at each time t. Note that VT + 1(k) = 0, that is, there is (by

assumption) no utility from having capital after death.

The value of any quantity of capital at any previous time can be calculated

by backward induction using the Bellman equation. In this problem, for each t = 0,1,2,...,T, the Bellman equation is

Vt(kt) = max over ct of { ln(ct) + b·Vt+1(kt+1) }, subject to kt+1 = A·kt^a − ct ≥ 0.

This problem is much simpler than the one we wrote down before, because it

involves only two decision variables, ct and kt + 1. Intuitively, instead of choosing his

whole lifetime plan at birth, the consumer can take things one step at a time. At

time t, his current capital kt is given, and he only needs to choose current

consumption ct and saving kt + 1.

To actually solve this problem, we work backwards. For simplicity, the current level

of capital is denoted as k. VT + 1(k) is already known, so using the Bellman

equation once we can calculate VT(k), and so on until we get to V0(k), which is

the value of the initial decision problem for the whole lifetime. In other words, once

we know VT−j+1(k), we can calculate VT−j(k), which is the maximum of ln(cT−j) + b·VT−j+1(A·k^a − cT−j), where cT−j is the choice variable and A·k^a − cT−j ≥ 0.

Working backwards, it can be shown that the value function at time t = T − j is VT−j(k) = a·(1 − (a·b)^(j+1))/(1 − a·b)·ln(k) + vT−j,

45

Page 46: Management Science Overview

where each vT−j is a constant, and the optimal amount to consume at time t = T − j is

cT−j(k) = (1 − a·b)/(1 − (a·b)^(j+1))·A·k^a,

which can be simplified to

cT(k) = A·k^a, and cT−1(k) = A·k^a/(1 + a·b), and cT−2(k) = A·k^a/(1 + a·b + a^2·b^2), etc.

We see that it is optimal to consume a larger fraction of current wealth

as one gets older, finally consuming all remaining wealth in period T,

the last period of life.
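A rough numerical check of this backward induction (not part of the original text) is sketched below: capital is discretised on a grid, the Bellman recursion is applied from t = T back to t = 0, and the resulting consumption policy is compared with the closed-form rule cT−j(k) = (1 − a·b)/(1 − (a·b)^(j+1))·A·k^a. The parameter values are illustrative.

    import numpy as np

    # Backward induction for the consumption-saving example (illustrative parameters).
    A, a, b, T = 1.0, 0.5, 0.9, 5         # technology constant, exponent, discount factor, horizon
    k_grid = np.linspace(0.05, 2.0, 400)  # grid of capital levels
    V = np.zeros(len(k_grid))             # V_{T+1}(k) = 0: no utility from capital after death

    for j in range(T + 1):                # work backwards: t = T, T-1, ..., 0
        V_new = np.empty_like(V)
        c_pol = np.empty_like(V)
        for i, k in enumerate(k_grid):
            wealth = A * k**a
            c = np.linspace(1e-6, wealth, 200)         # feasible consumption choices
            V_next = np.interp(wealth - c, k_grid, V)  # interpolate V_{t+1}(k_{t+1}) on the grid
            values = np.log(c) + b * V_next
            best = np.argmax(values)
            V_new[i], c_pol[i] = values[best], c[best]
        V = V_new
        # Compare the numerical policy with the closed-form rule at k = 1.
        c_closed = (1 - a * b) / (1 - (a * b) ** (j + 1)) * A * 1.0**a
        c_numeric = np.interp(1.0, k_grid, c_pol)
        print(f"t = T-{j}: closed-form c = {c_closed:.4f}, numerical c = {c_numeric:.4f}")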

c. Non-linear programming.

When expressions defining the objective function or constraints of an optimization model are not linear, one has a nonlinear programming model. Again, the class of situations appropriate for nonlinear programming is much larger than the class for linear programming. Indeed it can be argued that all linear expressions are really approximations for nonlinear ones.

Since nonlinear functions can assume such a wide variety of functional forms, there are many different classes of nonlinear programming models. The specific form has much to do with how easily the problem can be solved, but in general a nonlinear programming model is much more difficult to solve than a similarly sized linear programming model.

Mathematical formulation of the problem

The problem can be stated simply as:

max f(x) over x in X, to maximize some variable such as product throughput,

or

min f(x) over x in X, to minimize a cost function,

46

Page 47: Management Science Overview

where f(x) is the objective function, x is the vector of n decision variables, and X, the feasible region, is the set of points satisfying the constraints.

Methods for solving the problem

If the objective function f is linear and the constrained space is a polytope,

the problem is a linear programming problem, which may be solved using

well known linear programming solutions.

If the objective function is concave (maximization problem),

or convex (minimization problem) and the constraint set is convex, then the

program is called convex and general methods from convex

optimization can be used in most cases.

If the objective function is a ratio of a concave and a convex function (in the

maximization case) and the constraints are convex, then the problem can

be transformed to a convex optimization problem using fractional

programming techniques.

Several methods are available for solving nonconvex problems. One

approach is to use special formulations of linear programming problems.

Another method involves the use of branch and bound techniques, where

the program is divided into subclasses to be solved with convex

(minimization problem) or linear approximations that form a lower bound on

the overall cost within the subdivision. With subsequent divisions, at some

point an actual solution will be obtained whose cost is equal to the best

lower bound obtained for any of the approximate solutions. This solution is

optimal, although possibly not unique. The algorithm may also be stopped

early, with the assurance that the best possible solution is within a

tolerance from the best point found; such points are called ε-optimal.

Terminating to ε-optimal points is typically necessary to ensure finite

termination. This is especially useful for large, difficult problems and

problems with uncertain costs or values where the uncertainty can be

estimated with an appropriate reliability estimation.

Under differentiability and constraint qualifications, the Karush–Kuhn–

Tucker (KKT) conditions provide necessary conditions for a solution to be

optimal. Under convexity, these conditions are also sufficient.
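For reference, and stated here in the standard textbook form rather than quoted from the original, for a problem written as minimize f(x) subject to gi(x) ≤ 0 for i = 1, ..., m and hj(x) = 0 for j = 1, ..., p, the KKT conditions at a candidate point x* with multipliers ui and vj are:

∇f(x*) + Σ ui·∇gi(x*) + Σ vj·∇hj(x*) = 0 (stationarity)

gi(x*) ≤ 0 and hj(x*) = 0 (primal feasibility)

ui ≥ 0 (dual feasibility)

ui·gi(x*) = 0 for every i (complementary slackness)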

47

Page 48: Management Science Overview

2-dimensional example

The intersection of the line with the constrained space represents the solution

A simple problem can be defined by the constraints

x1 ≥ 0

x2 ≥ 0

x1^2 + x2^2 ≥ 1

x1^2 + x2^2 ≤ 2

with an objective function to be maximized

f(x) = x1 + x2

where x = (x1, x2). 
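A quick way to check this example numerically (this sketch is not part of the original text) is to hand it to a general-purpose nonlinear solver. The code below uses scipy.optimize.minimize with the SLSQP method and recovers the maximum at approximately x1 = x2 = 1 with f(x) = 2, which lies on the outer circle x1^2 + x2^2 = 2; the same pattern applies to the 3-dimensional example that follows.

    import numpy as np
    from scipy.optimize import minimize

    # Maximize f(x) = x1 + x2 by minimizing its negative.
    objective = lambda x: -(x[0] + x[1])

    constraints = [
        {"type": "ineq", "fun": lambda x: x[0]**2 + x[1]**2 - 1.0},  # x1^2 + x2^2 >= 1
        {"type": "ineq", "fun": lambda x: 2.0 - x[0]**2 - x[1]**2},  # x1^2 + x2^2 <= 2
    ]
    bounds = [(0, None), (0, None)]  # x1 >= 0, x2 >= 0

    result = minimize(objective, x0=[1.0, 0.5], method="SLSQP",
                      bounds=bounds, constraints=constraints)
    print("x* =", np.round(result.x, 4), " f(x*) =", round(-result.fun, 4))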

3-dimensional example

48

Page 49: Management Science Overview

The intersection of the top surface with the constrained space in the center

represents the solution

Another simple problem can be defined by the constraints

x1^2 − x2^2 + x3^2 ≤ 2

x1^2 + x2^2 + x3^2 ≤ 10

with an objective function to be maximized

f(x) = x1x2 + x2x3

where x = (x1, x2, x3).

49