Transcript of sda17

  • SDA 7: Fuzzy screening systems

    We describe a procedure, which we call the Fuzzy Screening method. This procedure is useful in environments in which we must select, from a large class of alternatives, a small subset to be further investigated.

    This initial screening procedure is based on preliminary information. The technique suggested here requires only a non-numerical scale for the evaluation and selection of alternatives. Using this procedure, each alternative is evaluated by each expert for satisfaction of his multi-criteria selection function.

    Each criterion can have a different degree of importance. The individual expert evaluations can then be aggregated to obtain an overall evaluation function.

    R.R. Yager, Fuzzy screening systems, in: R. Lowen and M. Roubens, eds., Fuzzy Logic: State of the Art (Kluwer, Dordrecht, 1993) 251-261.

    In screening problems one usually starts with a large set, X, of possible alternative solutions.

    Each alternative is essentially represented by a minimal amount of information supporting its appropriateness as the best solution.

    This minimal amount of information provided by each alternative is used to help select a subset A of X to be further investigated.

    Two prototypical examples of this kind of problem can be mentioned.

    Job selection problem. Here a large number of candidates, X, submit a resume (minimal information) in response to a job announcement. Based upon these resumes, a small subset of X, A, are called in for interviews. These interviews, which provide more detailed information, are the basis for selecting the winning candidate from A.

    Proposal selection problem. Here a large class of candidates, X, submit preliminary proposals (minimal information). Based upon these preliminary proposals, a small subset of X, A, are requested to submit full detailed proposals. These detailed proposals are the basis for selecting the winning candidate from A.

  • In the above examples the process of selecting the subset A, required to provide further information, is called a screening process.

    Yager suggests a technique, called the fuzzy screening system, for managing this screening process.

    The kinds of screening problems described above, besides being characterized as decision making with minimal information, generally involve multiple participants in the selection process. The people whose opinion must be considered in the selection process are called experts.

    Thus screening problems are a class of multiple expert decision problems. In addition, each individual expert's decision is based upon the use of multiple criteria.

  • So we have an ME-MCDM (Multi Expert-Multi Criteria Decision Making) problem with minimal information.

    The fact that we have minimal information associated with each of the alternatives complicates the problem because it limits the operations which can be performed in the aggregation processes needed to combine the multiple experts as well as the multiple criteria.

    The Arrow impossibility theorem

    K.J. Arrow, Social Choice and Individual Values (John Wiley & Sons, New York, 1951).

    is a reflection of this difficulty.

    Yager suggests an approach to the screening problem which allows for the requisite aggregations but which respects the lack of detail provided by the information associated with each alternative.

    The technique only requires that preference information be expressed by elements drawn from a scale that essentially only requires a linear ordering.

    This property allows the experts to provide information about satisfactions in the form of linguistic values such as high, medium, low.

    This ability to perform the necessary operations while requiring only imprecise linguistic preference valuations enables the experts to comfortably use the kinds of minimally informative sources of information about the objects described above.

    The fuzzy screening system is a two-stage process.

  • In the first stage, individual experts are asked to provide an evaluation of the alternatives. This evaluation consists of a rating for each alternative on each of the criteria.

    In the second stage, we aggregate the individual experts' evaluations to obtain an overall linguistic value for each object. This overall evaluation can then be used by the decision maker as an aid in the selection process.

    The problem consists of three components.

    The first component is a collection X = {X1, . . . , Xp},

    of alternative solutions from amongst which we desire to select some subset to be investigated further.

  • The second component is a group A = {A1, . . . , Ar},

    of experts or panelists whose opinion is solicited in screening the alternatives.

    The third component is a collection C = {C1, . . . , Cn},

    of criteria which are considered relevant in the choice of the objects to be further considered.

    For each alternative each expert is required to provide his opinion. In particular, for each alternative an expert is asked to evaluate how well that alternative satisfies each of the criteria in the set C. These evaluations of alternative satisfaction to criteria will be given in terms of elements from the following scale S:

  • Very High (VH)   S5
    High (H)         S4
    Medium (M)       S3
    Low (L)          S2
    Very Low (VL)    S1

    The use of such a scale provides a natural ordering, Si > Sj if i > j, and the maximum and minimum of any two scores are defined by

    max(Si, Sj) = Si if Si ≥ Sj,
    min(Si, Sj) = Sj if Sj ≤ Si.
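
    To make the ordinal arithmetic concrete, here is a minimal Python sketch (not part of Yager's text) that encodes S1, . . . , S5 as the integers 1, . . . , 5, so that max and min of scores reduce to ordinary integer comparisons; the names SCALE, s_max and s_min are illustrative choices of our own.

      # Encode the ordinal scale: the integer i stands for Si
      SCALE = {1: "VL", 2: "L", 3: "M", 4: "H", 5: "VH"}

      def s_max(i, j):
          # max(Si, Sj) = Si if Si >= Sj
          return max(i, j)

      def s_min(i, j):
          # min(Si, Sj) = Sj if Sj <= Si
          return min(i, j)

      print(SCALE[s_max(3, 2)], SCALE[s_min(3, 2)])   # -> M L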

    Thus for an alternative an expert provides a collection of n values

  • {P1, . . . , Pn}

    where Pj is the rating of the alternative on the j-th criterion by the expert. Each Pj is an element in the set of allowable scores S.

    Assuming n = 5, a typical scoring for an alternative from one expert would be:

    (medium, low, medium, very high, low)

    Independent of this evaluation of alternative satisfaction to criteria, each expert must assign a measure of importance to each of the criteria. An expert uses the same scale, S, to provide the importance associated with the criteria.

    The next step in the process is to find the overall evaluation for an alternative by a given expert.

    A crucial aspect here is the taking of the negation of the importances as

    Neg(Si) = S(5-i+1).

    For the scale that we are using, we see that the negation operation provides the following:

    Neg(VH) = VL
    Neg(H) = L
    Neg(M) = M
    Neg(L) = H
    Neg(VL) = VH
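
    Under the same integer encoding (again a sketch of our own, not from the source), the negation is simply a reflection of the scale about its midpoint; neg is an assumed helper name.

      SCALE = {1: "VL", 2: "L", 3: "M", 4: "H", 5: "VH"}

      def neg(i):
          # Neg(Si) = S(5 - i + 1) on the five-point scale
          return 5 - i + 1

      print([SCALE[neg(i)] for i in (5, 4, 3, 2, 1)])
      # -> ['VL', 'L', 'M', 'H', 'VH'], i.e. Neg(VH) = VL, ..., Neg(VL) = VH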

    Then the unit score of each alternative by each expert, denoted by U, is calculated as follows

  • U = minj{Neg(Ij) ∨ Pj}

    where Ij denotes the importance of the j-th criterion and ∨ denotes the max of the two scale values.

    We note that this is essentially an anding of the criteria satisfactions modified by the importance of the criteria.

    The formula can be seen as a measure of the degree to which an alternative satisfies the following proposition:

    All important criteria are satisfied.

    Example 1. Consider some alternative with the following scores on four criteria

  • Criteria:     C1   C2   C3   C4
    Importance:   VH   VH   M    L
    Score:        M    L    VL   VH

    In this case we have

    U = min{Neg(VH) ∨ M, Neg(VH) ∨ L, Neg(M) ∨ VL, Neg(L) ∨ VH}
      = min{VL ∨ M, VL ∨ L, M ∨ VL, H ∨ VH}
      = min{M, L, M, VH} = L

  • The essential reason for the low performance of this object is that it performed low on the second criterion, which has a very high importance.

    This formulation can be seen as a generalization of a weighted averaging. Linguistically, this formulation is saying that

    If a criterion is important then an alternative should score well on it.
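
    The first-stage computation can be sketched as follows (a hedged Python illustration using the integer encoding of the scale; unit_score and the literal lists are our own choices, but the numbers reproduce Example 1).

      SCALE = {1: "VL", 2: "L", 3: "M", 4: "H", 5: "VH"}

      def neg(i):
          # Neg(Si) = S(5 - i + 1)
          return 6 - i

      def unit_score(importances, scores):
          # U = min over j of [ Neg(Ij) v Pj ], where v is max on the scale
          return min(max(neg(i), p) for i, p in zip(importances, scores))

      # Example 1: importances VH, VH, M, L and scores M, L, VL, VH
      print(SCALE[unit_score([5, 5, 3, 2], [3, 2, 1, 5])])   # -> L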

    As a result of the first stage, we have for each alternative a collection of evaluations

    {X1, X2, . . . , Xr}

    where Xk is the unit evaluation of an alternative by the k-th expert.

  • In the second stage the technique for combining the experts' evaluations to obtain an overall evaluation for each alternative is based upon the OWA (Ordered Weighted Averaging) operators.

    The first step in this process is for the decision maker to provide an aggregation function, Q.

    This function can be seen as a generalization of the idea of how many experts the decision maker feels need to agree on an alternative for it to pass the screening process.

    In particular, for each number i, where i runs from 1 to r, the decision-making body must provide a value Q(i) indicating how satisfied it would be in passing an alternative that i of the experts were satisfied with.

  • The values for Q(i) should be drawn from the scale S described above.

    It should be noted that Q should have certain characteristics to make it rational:

    As more experts agree, the decision maker's satisfaction or confidence should increase:

    Q(i) ≥ Q(j), i > j.

    If all the experts are satisfied then his satisfaction should be the highest possible:

    Q(r) = Very High.

    A number of special forms for Q are worth noting:

    If the decision-making body requires all experts to support an alternative then we get

    Q(i) = VL for i < r
    Q(r) = VH

    If the support of just one expert is enough to make an alternative worthy of consideration then

    Q(i) = VH for all i

    If the support of at least m experts is needed for consideration then

    Q(i) = VL for i < m
    Q(i) = VH for i ≥ m
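
    For illustration, the "at least m experts" form of Q could be written as below (an assumed helper, not named in the source), again with S1, . . . , S5 encoded as the integers 1, . . . , 5.

      def q_at_least(m, i):
          # Q(i) = VL (S1) for i < m and Q(i) = VH (S5) for i >= m
          return 5 if i >= m else 1

      print([q_at_least(2, i) for i in (1, 2, 3, 4)])   # -> [1, 5, 5, 5]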

    Having appropriately selected Q we are now in the position to use the OWA method for aggregating the expert opinions.

    Assume we have r experts, each of which has a unit evaluation for a project, denoted Xk.

    The first step in the OWA procedure is to order the Xk's in descending order; thus we shall denote Bj as the j-th highest score among the experts' unit scores for the project.

    To find the overall evaluation for the project, denoted X, we calculate

    X = maxj{Q(j) ∧ Bj}, where ∧ denotes the min of the two scale values.

    In order to appreciate the workings of this formulation we must realize that

    Bj can be seen as the worst of the j top scores.

    Q(j) can be seen as an indication of how important the decision maker feels that the support of at least j experts is.

    The term Q(j) ∧ Bj can be seen as a weighting of an object's j best scores, Bj, and the decision maker's requirement that j people support the project, Q(j).

    Example 2. Assume we have four experts, each providing a unit evaluation for a project obtained by the methodology discussed in the previous section.

    X1 = M, X2 = H, X3 = H, X4 = VH.

    Reordering these scores we get

    B1 = VH, B2 = H, B3 = H, B4 = M.

  • Furthermore, we shall assume that our decision-making body chooses as its aggregation function the average-like function QA:

    QA(1) = L (S2)
    QA(2) = M (S3)
    QA(3) = VH (S5)
    QA(4) = VH (S5)

    We calculate the overall evaluation as

    X = max{L ∧ VH, M ∧ H, VH ∧ H, VH ∧ M}
      = max{L, M, H, M}
      = H

    Thus the overall evaluation of this alternative is high.
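
    The second-stage aggregation can be sketched in the same style (a hedged illustration; overall_score and the QA table below are our own encoding, but the numbers reproduce Example 2).

      SCALE = {1: "VL", 2: "L", 3: "M", 4: "H", 5: "VH"}

      def overall_score(unit_scores, Q):
          # X = max over j of [ Q(j) ^ Bj ], where ^ is min on the scale and
          # Bj is the j-th highest of the experts' unit scores
          B = sorted(unit_scores, reverse=True)
          return max(min(Q[j], b) for j, b in enumerate(B, start=1))

      # Example 2: unit evaluations M, H, H, VH and the average-like QA
      QA = {1: 2, 2: 3, 3: 5, 4: 5}   # L, M, VH, VH
      print(SCALE[overall_score([3, 4, 4, 5], QA)])   # -> H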
