What is Sigma Level





http://www.smartersolutions.com/pdfs/online_database/vidasset.php?documentid=194
The Many Faces of Capability Indices

What is sigma level? Why is it not taught correctly?

Written by Rick Haynes on April 21, 2011, at 9:12 pm

So many people have come to Lean Six Sigma without the history of the sigma level. I am not sure that most instructors even know the truth. Most current instructors are an example of Deming's fourth rule of the funnel: they know only a fraction of what their teacher knew. No one is going back to the beginning to learn it right.

I posted on a LinkedIn discussion today, started by a person wanting to show the mathematical linkage between Cpk and the sigma level estimate. It was quite an effort, in that it included Z-Bench values and the 1.5 sigma shift. For all its detail, its basis was wrong. It was like the proof that 1 + 1 = 1 (if you have seen it).

Sigma level is tossed around these days with no true understanding. The lower-quality six sigma programs have more or less equated the sigma level with the number of standard deviations away from the mean (a z-score). I am an old six sigma guy and was there when it started. Motorola derived the sigma level from attribute tracking of quality. There were no Cp, Cpk, or other continuous-data analyses. Sigma level is an attribute capability measure.

Here is the short story. You took a process or product and determined its process yield. Using a Poisson model, you solved for the DPU (defects per unit). This worked because the yield was the probability that a Poisson distribution with a mean of DPU produced a count of zero. Notice that this assumes a unit or product can have more than one defect. While no one actually counts the defects, each unit is simply judged good or bad.
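
As a minimal sketch of that step in Python (the 95% yield figure is a hypothetical number for illustration, not from the article), solving the Poisson zero-count equation for DPU:

    import math

    def dpu_from_yield(process_yield: float) -> float:
        # Yield is the Poisson probability of zero defects on a unit:
        #   yield = P(X = 0) = exp(-DPU), so DPU = -ln(yield)
        return -math.log(process_yield)

    print(dpu_from_yield(0.95))  # ~0.0513 defects per unit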

Under the assumption that every action within the process, the facility, and so forth performed at a single common quality level, the final quality level becomes a function of the complexity of the item. Simple things have better yields, while complex items have lower yields.

Then you determined the process or product opportunity count. This was intended to be a count of the total number of opportunities to create a defect. Then you took the DPU for each product, divided it by the opportunity count for that product, and multiplied by 1,000,000. This new value was called defects per million opportunities (DPMO). The belief was that everything in the organization has about the same DPMO, so you can combine dissimilar products and services on this basis to come up with a facility-wide DPMO.
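
Continuing the sketch, with a hypothetical opportunity count of 50:

    def dpmo(dpu: float, opportunities: int) -> float:
        # Normalize defects per unit by the product's complexity,
        # scaled to a million opportunities.
        return dpu / opportunities * 1_000_000

    print(dpmo(0.0513, 50))  # ~1,026 DPMO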

DPMO was then converted to a sigma level. You used a normal probability table to look up the DPMO value and determine the number of standard deviations away from the mean of a normal distribution that would produce that level of defectives (an attribute conversion to an equivalent normal distribution). If the defect data were collected over a short period of time (and so included only short-term variability), you would report the z-value as the sigma level. If you made the assessment over a long period (using long-term variability), you would add 1.5 sigma to the z-value before reporting the sigma level. It was shown, back then, that long-term capability could be approximated by dropping the short-term z-score by about 1.5 sigma.
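
A minimal sketch of that lookup, using scipy's inverse normal CDF in place of the printed table (the 1,026 DPMO value carries over from the hypothetical example above):

    from scipy.stats import norm

    def sigma_level(dpmo: float, long_term_data: bool = True) -> float:
        # z is the normal quantile leaving dpmo / 1e6 in the upper tail.
        z = norm.ppf(1 - dpmo / 1_000_000)
        # Long-term data gets the 1.5-sigma shift added before reporting.
        return z + 1.5 if long_term_data else z

    print(sigma_level(1026))         # long-term: ~4.58
    print(sigma_level(1026, False))  # short-term: ~3.08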

This is why a six sigma process has 3.4 DPMO, or a 0.0000034 probability of a defective, which corresponds to a 4.5 z-score. The assumption was that a short-term capability for a single opportunity equal to a z-score of 6 would, by the end of a year, be closer to a z-score of 4.5. We would call it a six sigma process but expect the per-opportunity defect rate to be 3.4 ppm (DPMO) at the end of the year.
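
That arithmetic is easy to check: the normal tail area beyond the shifted z-score of 4.5, scaled to a million opportunities, reproduces the famous figure.

    from scipy.stats import norm

    # Tail probability beyond z = 6 - 1.5 = 4.5, per million opportunities.
    print(norm.sf(4.5) * 1_000_000)  # ~3.4 DPMO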

So all the z-bench and modern sigma-level wannabes really represent the sigma level for a one-opportunity product. The true sigma level for a product or service is the one equivalent to the z-bench defect probability divided by the number of opportunities.
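
Read literally, that closing conversion might look like the sketch below, under the assumption that "probability/opportunities" means spreading the z-bench tail probability across the opportunity count:

    from scipy.stats import norm

    def product_sigma_level(z_bench: float, opportunities: int) -> float:
        # Spread the z-bench defect probability across the product's
        # opportunities, then convert the per-opportunity rate back to
        # a z-value; the 1.5-sigma adjustment applies as before,
        # depending on whether the data are short- or long-term.
        p_defective = norm.sf(z_bench)
        return norm.ppf(1 - p_defective / opportunities)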