Statistical Process Control
Transcript of Statistical Process Control
4251
0011 0010 1010 1101 0001 0100 1011
To Help Ensure Quality
Quality means fitness for use
- quality of design
- quality of conformance
Quality is inversely proportional to variability.
What is Quality?
• Transcendent definition: excellence
– Excellence in every aspect
• PERFORMANCE
How well the output does what it is supposed to do.
• RELIABILITY
The ability of the output (and its provider) to
function as promised
What is Quality
• CONVENIENCE and ACCESSIBILITY
How easy it is for a customer to use the product or
service.
• FEATURES
The characteristics of the output that exceed the
output’s basic functions.
• EMPATHY
The demonstration of caring and individual attention to
customers.
What is Quality
• CONFORMANCE
The degree to which an output meets specifications or
requirements.
• SERVICEABILITY
How easy it is for you or the customer to fix the output with minimum downtime or cost.
• DURABILITY
How long the output lasts.
What is Quality
• AESTHETICS
How a product looks, feels, tastes, etc.
• CONSISTENCY
The degree to which the performance remains the same
over time.
• ASSURANCE
The knowledge and courtesy of the employees and
their ability to elicit trust and confidence.
What is Quality
• RESPONSIVENESS
Willingness and ability of employees to help
customers and provide proper services.
• PERCEIVED QUALITY
The relative quality level of the output in the eyes of
the customers.
What is Quality
• Product-based definition: quantities of product attributes
– Attributes are non-measurable types of data
• What are all the different features
What is Quality
• User-based definition: fitness for intended use
– How well does the product meet or exceed the expected use as seen by the user
What is Quality
• Value-based definition: quality vs. price
– How much will the product or service cost, and how much attention to quality can we afford at that price?
– Cheap product, little quality
What is Quality
• Manufacturing-based definition: conformance to specifications
– The product has both variable specifications (measurable) and attribute specifications (non-measurable) that manufacturing monitors for conformance
What is SPC
• SPC, Statistical Process Control, is a methodology designed in the 1930s to characterize changes in process variation from a standard.
– It can be used for both attributes and variables
• The basic tool used in SPC is the control chart
– There are various types of control charts
• Mean chart
• Range chart
• Median chart
• Mean and range chart (X-bar and R)
• c chart
• p chart, etc.
• Control charts
– are a graphical method for detecting whether the underlying distribution of variation of some measurable characteristic of the product seems to have undergone a shift
– monitor a process in real time
– map the output of a production process over time and signal when a change in the probability distribution generating observations seems to have occurred
– are based on the Central Limit Theorem
• The Central Limit Theorem says that the distribution of sums of Independent and Identically Distributed (IID) random variables approaches the normal distribution as the number of terms in the sum increases.
– Things tend to gather around a center point
– As they gather, they form a bell-shaped curve
• The center of things is described in various ways
– Geographical center
– Center of gravity
• In statistics, when we look at groups of numbers, they are centered in three different ways
– Mode
– Median
– Mean
Mode
• Mode
– The mode is the number that occurs most frequently in a group of numbers
• 7, 9, 11, 6, 13, 6, 6, 3, 11
– Put them in order
– 3, 6, 6, 6, 7, 9, 11, 11, 13
• The mode is 6
Median
• The median (X̃) is like the geographical center; it is the middle number
• 7, 9, 11, 6, 13, 6, 6, 3, 11
– Put them in order
– 3, 6, 6, 6, 7, 9, 11, 11, 13
• 7 is the median
Mean
• Mean is the average of all the numbers and is designated by the symbol μ for the population mean and X̄ for the sample mean
– The mean is derived by adding all the numbers and then dividing by the quantity of numbers
– X̄ = (X1 + X2 + X3 + … + Xn) / n
X̄ = (1/n) Σ Xi , i = 1 … n

The mean … is equal to … the sum of … all the numbers … from the first number … to the nth number … multiplied by 1 over n
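The three measures of center can be checked with Python's standard `statistics` module; a minimal sketch using the data set from these slides:

```python
from statistics import mean, median, mode

# The data set used in the mode and median examples above
data = [7, 9, 11, 6, 13, 6, 6, 3, 11]

data_mode = mode(data)      # value occurring most frequently
data_median = median(data)  # middle value of the sorted data
data_mean = mean(data)      # sum of all values divided by the count: 72 / 9
```

For this data set the mode is 6, the median is 7 and the mean is 8, matching the worked examples.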
If we had the numbers 1, 2, 3, 6 and 8, you can see below that they “balance” the scale. The mean
is not the geometric center but is like the center of gravity
1 2 3 6 8
• As the numbers are accumulated, they are put in order, smallest to largest, and the count of each number can then be put into a graph called a histogram
• A curve is considered “normal” if the following things occur
– The shape is symmetrical about the mean
– The mean, the mode and the median are the same
Variation
• The numbers that were not exactly on the mean are considered “variation”
– When weighing the candy, the manufacturer is targeting a specific weight
– Those that do not hit the specific weight are variations
• Will there always be variation?
• There are two types of variation
– Common cause variation
– Special cause variation
Common Cause
• Common cause variation is the normal variation that exists in a process when it is running exactly as it should
– e.g., in the production of that candy
• When the operator is running the machine properly
– Within the cycle time allotted for each drop of candy
– Candy is properly placed on trays
– Temperatures are where they need to be
– Mixture is correct
• When the machine is running properly
– Tooling is sharp and aligned correctly
– All components are properly maintained
– Voltage is correct
– Safety interlocks are properly set
• When the material is correct
– Hardness
– Size
– Thickness
– Blend
• When the method is correct
– Right tonnage machine
– Proper timing
• When the environment is correct
– Ambient temperature
– Ambient humidity
– Dust and dirt
– Corrosives
• When the original measurements are correct
– Die opening dimensions
• As we have just reviewed, common cause variation cannot be defined by one particular characteristic
– It is the inherent variation of all the parts of the operation together
• Voltage fluctuation
• Looseness or tightness of bearings
• Common cause variation must be optimized and run at a reasonable cost
– Don’t spend a dollar to save a penny
Special Cause
• Special cause variation occurs when one or more of the process specifications/conditions change
– Temperatures change
– Tools dull
– Voltage drops drastically
– Material changes
– Stops move
– Bearings are failing
• Special cause variations are the variations that need to be corrected
– But how do we know when these problems begin to happen?
Statistical Process Collection and Control!
• Variation in the process occurs in two major ways.
– The range changes
– The mean changes
• Range is the smallest data point subtracted from the largest data point
– It represents the total data spread that has been sampled
– If the range gets smaller, or more significantly if it gets larger, something has changed in the process.
[Figure: distribution curves centered on X̄ comparing a normal process and normal range with a changed process and changed range]
• Give me some examples that you would think would cause the range to tighten up
• How about loosen up?
• If you remember, the mean is the point around which the data is centered
– If the mean changes, then the central point has changed
• Now let’s see what the curve would look like if the mean changed
• Most of the time we see both happen to some degree. In the previous example, you may have noticed that the range also became smaller
• It is important that when parts are sampled from an entire population of production, they are randomly selected
– This is called statistical collection of data
Why do we need Data?
• Assessment
– Assessing the effectiveness of specific techniques or corrective actions
• Evaluation
– Determine the quality of a process or product
• Improvement
– Help us understand where improvement is needed
• Control
– To help control a process and to ensure it does not move out of control
• Prediction
– Provide information and trends that enable us to predict when an activity will fail in the future
• Characterization
– Help us understand weaknesses and strengths of products and processes
Deciding on the Data
• Before data is collected, a plan needs to be put in place
– If there is no clear objective, then the data is very likely not going to be useful to anyone
– Do not collect data just because you are supposed to
Deciding on the Data
• Base the data being collected on the hypothesis of the process to be examined
– Know what the expected results should be
– Know what parts of the process can give meaningful data
– Understand the process as a whole
Deciding on the Data
• Consider the impact of the data collection process on the whole organization
– Data collection can be very expensive and time consuming
– Data collection imparts a “Big Brother is watching” feeling on employees
– Data collection causes employees to work harder and more focused than they normally would
– Data gathering is tedious and requires a commitment
Deciding on the Data
• Data gathering must have complete, total, 100% support from management
– Data gathering inherently takes time, analysis even more time, and positive results even more time than that.
– Management can be very impatient sometimes
Guidelines for Useful Data
• Data must contain information that allows for identification of types of errors or changes made
• Data MUST include the cost of making changes
• Data must be compared in some way to an expectation or specification
• Benchmarks must be available from historical data if possible
• Data must be clear enough to allow for repeatability in the future, or on other projects
• Data should cover the following
– What, by whom and for what purpose
– What are the specifications
– Who will gather the data
– Who will support the data gathering
• And have they agreed
– How will the data be gathered
– How will the data be validated
– How will the data be managed
Tips on data collection
• Establish goals for the data gathering, and define and be prepared for questions that may occur during gathering
• Involve all the people who are going to be affected by the gathering, analysis and corrective/preventive actions
• Keep goals small to start with, because the data could be almost overwhelming
• Design the data collection form to be simple, so that comparison is easy
– Provide training in the process to be followed
• Include any validation criteria (check against known calibrated gauge, etc.) and record results.
• Automate as much as possible
• The data will be collected onto a collection form and then transferred to a control chart
– In many cases the two forms are the same
• These control charts will help us decide if the process is running as we want and, hopefully, tell us when it is beginning to change so we can avoid producing bad parts
• These charts can be grouped into two major categories
– They are grouped based on the type of data being collected
• Attributes data
• Variables data
Attributes
• Attributes data
– Non-measurable characteristics
• Can be very subjective
– We must develop specific descriptions for each attribute
• Called operational definitions
– Blush
– Splay
– Scratched
– Color
– Presence, etc.
Attributes
– Observations are counted
• Yes/No
• Present/Absent
• Meets/Doesn’t meet
– Visually inspected
• Go/no-go gauges
• Pre-control gauges
– Discrete scale (has limits)
Attributes
• If color happens to be an attribute being inspected for
– Typically, the expected color sample is given
– Maybe a light and a dark sample are given
• The acceptable range is in between
– A reject is not measured, just counted as one
• Scratches might be given by location, length of scratch, depth of scratch, width of scratch, visible at a certain distance, etc.
Attributes
• The description becomes more detailed as the quality becomes more critical.
– It is very important that inspection is not used to sort out chronic problems
– Inspection is used to collect data
• Inspection should be as close to the process as possible and by the operators
Attributes
• On an attributes chart, the characteristic name is used
– Scratched, wrong color, specks, bubbles, etc.
– Every time an unacceptable characteristic is found, it is marked on the data collection sheet using a method such as tally marks ( IIII )
Variables
• Developed through measuring
– Is very objective
– Can be temperature, length, width, weight, force, volts, amps, etc.
• Uses a measuring tool
– Scale
– Meter
• Scale increments are specified by design engineers, customer, etc.
Variables
– As with attributes, variables inspection should be done as close to process as possible, preferably by the operator.
• Inspection should not be done to sort but for data collection and correction of the process
• This will allow for quick response and rapid correction, minimizing defect quantities
Variables
• Important considerations for data collection
– The measuring tools must be accurate and precise
• Accurate means that it can produce measurements similar to a standard
• Precise means repeatability
• Precise means repeatability
Variables
– Sampling program used
• Sampling involves removing a representative quantity of components from normal production and inspecting them
• The sample size and frequency are very important in remaining confident that any negative process changes are being uncovered.
• Sampling is also dependent on the type of charting you will be doing, and will be discussed with each chart type.
Variables
• After the variables are collected, they can be visualized as a normal curve
– The mean can be calculated, and another characteristic of the normal curve, the standard deviation, can also be calculated
– There are many low-priced and free programs that allow input and automatic calculation
Standard Deviation of Variables
• Standard deviation is a characteristic of all normal curves
– One standard deviation on each side of the mean represents 68.26% of the area beneath the curve (or 68.26% of all data)
– Two standard deviations on each side of the mean represent 95.44% of the area beneath the curve
– Three standard deviations on each side of the mean represent 99.74% of the area
– Six standard deviations on each side of the mean represent 99.9999966% of the area (3.4 defects per million)
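These coverage figures can be reproduced from the standard normal distribution using `math.erf`; a quick sketch (the computed values differ from the slide figures only in the last rounded digit):

```python
import math

def coverage(k):
    """Fraction of a normal distribution lying within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2))

pct_1sigma = round(coverage(1) * 100, 2)  # ~68.27% (quoted above as 68.26%)
pct_2sigma = round(coverage(2) * 100, 2)  # ~95.45% (quoted above as 95.44%)
pct_3sigma = round(coverage(3) * 100, 2)  # ~99.73% (quoted above as 99.74%)

# The 3.4 defects-per-million figure assumes the conventional 1.5-sigma
# mean shift, i.e. a one-sided tail beyond 6 - 1.5 = 4.5 sigma
dpm = round(0.5 * (1 - math.erf(4.5 / math.sqrt(2))) * 1e6, 1)
```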
[Figure: normal curve centered on X̄ over a .240–.260 scale, showing 68.26% of the area within ±1σ, 95.44% within ±2σ, and 99.74% within ±3σ]
Standard Deviation
• Whenever we talk about standard deviation, there are two types
– Population standard deviation
– Sample standard deviation
Population Standard Deviation
• The population is considered the greater lot that the sample is taken from
– A shipment
– A day’s production
– A shift’s production
• The symbol for the population standard deviation is the Greek letter sigma (σ)
Sample Standard Deviation
• The sample standard deviation is calculated from the specific sample data, which is considerably smaller than the entire population
• The symbol for the sample standard deviation is the lower case “s”
• The formulas for the standard deviations are

σ = √( Σ (X – μ)² / N )

s = √( Σ (X – X̄)² / (n – 1) )
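A sketch of both formulas in Python, using the nine-number data set from the mode and median slides:

```python
import math

def population_std(data):
    """sigma: divide the summed squared deviations by the population size N."""
    mu = sum(data) / len(data)
    return math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))

def sample_std(data):
    """s: divide by n - 1, since the sample mean is estimated from the data."""
    xbar = sum(data) / len(data)
    return math.sqrt(sum((x - xbar) ** 2 for x in data) / (len(data) - 1))

data = [3, 6, 6, 6, 7, 9, 11, 11, 13]  # mean is 8
```

The sample version always comes out slightly larger than the population version because of the smaller divisor.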
• Vilfredo Pareto
– Italy’s wealth
• 80% held by 20% of the people
• Used when analyzing attributes
– Based on the results of tally numbers in specific categories
• What is a Pareto chart used for?
– To display the relative importance of data
– To direct efforts to the biggest improvement opportunity by highlighting the vital few in contrast to the useful many
Constructing a Pareto Chart
• Determine the categories and the units for comparison of the data, such as frequency, cost, or time.
• Total the raw data in each category, then determine the grand total by adding the totals of each category.
• Re-order the categories from largest to smallest.
• Determine the cumulative percent of each category
(i.e., the sum of each category plus all categories that precede it in the rank order, divided by the grand total and multiplied by 100).
Constructing a Pareto Chart
• Draw and label the left-hand vertical axis with the unit of comparison, such as frequency, cost or time.
• Draw and label the horizontal axis with the categories. List from left to right in rank order.
• Draw and label the right-hand vertical axis from 0 to 100 percent. The 100 percent should line up with the grand total on the left-hand vertical axis.
• Beginning with the largest category, draw in bars for each category representing the total for that category.
Constructing a Pareto Chart
• Draw a line graph beginning at the right-hand corner of the first bar to represent the cumulative percent for each category as measured on the right-hand axis.
• Analyze the chart. Usually the top 20% of the categories will comprise roughly 80% of the cumulative total.
• Let’s assume we are listing all the rejected products removed from a candy manufacturing line in one week
– First we put the rejects in specific categories
• No wrapper
• No center
• Wrong shape
• Short shot
• Wrapper open
• Underweight
• Overweight
– Then we tally how many of each category we have
• No wrapper - 10
• No center - 37
• Wrong shape - 53
• Short shot - 6
• Wrapper open - 132
• Underweight - 4
• Overweight – 17
– Get the total rejects - 259
– Develop a percentage for each category
• No wrapper – 10/259 = 3.9%
• No center – 37/259 = 14.3%
• Wrong shape – 53/259 = 20.5%
• Short shot – 6/259 = 2.3%
• Wrapper open – 132/259 = 51%
• Underweight – 4/259 = 1.5%
• Overweight – 17/259 = 6.6%
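The tally-to-percentage steps can be scripted; a sketch using the counts above (the cumulative values here are computed from raw counts, so they can differ slightly from summing the rounded slide percentages):

```python
# Reject tallies from the candy-line example
tallies = {
    "Wrapper open": 132, "Wrong shape": 53, "No center": 37,
    "Overweight": 17, "No wrapper": 10, "Short shot": 6, "Underweight": 4,
}

total = sum(tallies.values())  # grand total of rejects
ordered = sorted(tallies.items(), key=lambda kv: kv[1], reverse=True)

pareto, running = [], 0
for category, count in ordered:
    running += count
    pareto.append((category,
                   round(100 * count / total, 1),     # category percent
                   round(100 * running / total, 1)))  # cumulative percent
```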
• Now place the counts in a histogram, largest to smallest
[Pareto histogram, counts ordered largest to smallest: Wrapper Open 51%, Wrong Shape 20.5%, No Center 14.3%, Overweight 6.6%, No Wrapper 3.9%, Short Shot 2.3%, Underweight 1.5%]
• Finally, add up each and plot as a line diagram
[Pareto chart with cumulative line: 51%, 71.5%, 85.8%, 92.4%, 96.3%, 98.6%, 100.1% (the last value exceeds 100% due to rounding)]
• After the problem is corrected, continue the data collection
[Pareto chart after correction: Wrong Shape 41.7%, No Center 29.1%, Overweight 13.4%, No Wrapper 7.8%, Short Shot 4.7%, Underweight 3.1%]
• Sometimes other information would be better
– Use the scrap cost of each rejected part
– Use the rework cost of each rejected part
• This can be especially useful if the rejects are all at about the same quantity
Here are some tips
• Create before-and-after comparisons of Pareto charts to show the impact of improvement efforts.
• Construct Pareto charts using different measurement scales: frequency, cost or time.
• Pareto charts are useful displays of data for presentations.
• Use objective data to perform Pareto analysis rather than team members’ opinions.
• If there is no clear distinction between the categories -- if all bars are roughly the same height or half of the categories are required to account for 60 percent of the effect -- consider organizing the data in a different manner and repeating the Pareto analysis.
• Pareto analysis is most effective when the problem at hand is defined in terms of shrinking the product variance to a customer target, for example, reducing defects or eliminating the non-value-added time in a process.
• The cumulative curve can also be removed, especially if there is no distinct shift in the slope of the curve.
X-bar and R
• When are they used?
– When you need to assess variability
– When the data can be collected on an ongoing basis
– When it can be collected over time
– When we are using variables
– Subgroups must be more than 1
• How is it made?
– First, complete the header information
• This is important so that each collection can be properly understood and separated from others.
– Record the data
• Not just data, but significant observations.
– Calculate the mean of each subgroup of information
– Calculate the range for each subgroup
• When the first chart is finished, calculate the grand average, X̿
– This is the average of the averages
• Calculate the average of the ranges, R̄
• Calculate the control limits
– The formulas are
– UCLx̄ = X̿ + (A2 × R̄)
– LCLx̄ = X̿ – (A2 × R̄)
Weighting Factors
Subgroup Size A2 D3 D4
2 1.880 0 3.267
3 1.023 0 2.574
4 0.729 0 2.282
5 0.577 0 2.114
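The control-limit formulas and the A2 column of the table can be sketched in Python; the example values (X̿ = 10.00, R̄ = 4.653, subgroups of 5) are borrowed from the timing study later in these slides:

```python
# A2 weighting factors keyed by subgroup size (from the table above)
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def xbar_limits(grand_average, average_range, subgroup_size):
    """UCLx = grand average + A2 * Rbar; LCLx = grand average - A2 * Rbar."""
    spread = A2[subgroup_size] * average_range
    return grand_average + spread, grand_average - spread

ucl, lcl = xbar_limits(10.00, 4.653, 5)
```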
• Scale the charts
– Use a scale that ensures all the current numbers fit on the chart and that new numbers will also fit, even outside the control limits
• Find the largest X̄ value and compare it to the UCL. Use the larger
• Find the smallest X̄ value and compare it to the LCL. Use the smaller
• Subtract the smaller from the larger and write down the difference
• Divide the difference by 2/3 the number of lines on the chart (30 lines on this chart) and round upward if needed.
Control Chart Interpretation
• Interpret the data
– Any point lying outside the control limits
– Seven points in a row above or below the average
– Seven points in a row going in one direction
– Any non-random patterns
• Should look like the normal curve
– Too close to the average
– Too far from the average
– Cycling
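The first three rules above are easy to automate; a minimal sketch (the function name and signature are illustrative, not from the slides):

```python
def control_chart_signals(points, average, ucl, lcl):
    """Return the out-of-control signals found in a sequence of plotted points."""
    signals = []
    # Rule 1: any point outside the control limits
    if any(p > ucl or p < lcl for p in points):
        signals.append("point outside control limits")
    # Rule 2: seven points in a row above or below the average
    for i in range(len(points) - 6):
        window = points[i:i + 7]
        if all(p > average for p in window) or all(p < average for p in window):
            signals.append("seven points on one side of the average")
            break
    # Rule 3: seven points in a row going in one direction
    for i in range(len(points) - 6):
        steps = [b - a for a, b in zip(points[i:i + 6], points[i + 1:i + 7])]
        if all(s > 0 for s in steps) or all(s < 0 for s in steps):
            signals.append("seven points in one direction")
            break
    return signals
```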
• Declare whether in control or out of control
• Respond to the information
PreControl
• PreControl is a form of X-bar and R chart… without the chart.
– It is to be used only when a process has been proven to be in control
– It is to be used to stop production before bad parts are produced
• A definitive variable must be selected that will tell if the process is moving out of control
– e.g., weight, length, warp, etc.
• A PreControl gauge must be made to measure the variable
– Dial indicator, weigh scale, taper gauge, etc.
• The total specified range that is acceptable is used to calculate the standard deviation, and plus or minus two standard deviations are the control limits for the PreControl gauge
– The third standard deviation is red
– The second standard deviation is yellow
– The first standard deviation is green
• Rules of PreControl
– At start-up, after the process has been set to known parameters that produce good parts
• Collect 5 samples in a row from each cavity, mold, core, station, whatever
• Each is checked with the definitive PreControl gauge
• All five of each must be in the green
– If not, the process must be brought under control
• Once all five are in the green the process is started up
• At selected intervals (typically once an hour or once every half hour) two samples in a row are selected from each cavity, etc.
– If both are green, the process is OK
– If the first is green but the second is yellow, OK but be alert
– If both are yellow, stop the process and go back to start-up
– If either is red, stop the process and go back to start-up
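The running rules above can be written as a small decision function; a sketch assuming samples are already classified into green/yellow/red zones (the slides only state the green-then-yellow case, so a yellow first sample is treated the same way here):

```python
def precontrol_decision(zone1, zone2):
    """Decide what to do after the two-sample PreControl check."""
    if "red" in (zone1, zone2):
        return "stop and return to start-up"   # either sample red
    if zone1 == "yellow" and zone2 == "yellow":
        return "stop and return to start-up"   # both yellow
    if "yellow" in (zone1, zone2):
        return "continue, but be alert"        # one yellow (assumed for either position)
    return "continue, process OK"              # both green
```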
• We can determine how far the data being gathered is out of specification
– Let us look at some make-believe data
• We are investigating the time it takes for a process to pick up a part, move it to another area and place it on a conveyor
– It is unacceptable if it takes more than 14 seconds
• Data collection has given us the following information
– The grand mean is calculated to be 10.00 seconds
– Our sample size has been 5 observations each hour
– Our calculated average range is 4.653
Estimated Standard Deviation
• The formula to find the estimated standard deviation is

σ̂ = R̄ / d₂

• This standard deviation is calculated to one more decimal place than the original data
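A sketch of the calculation with the slide's numbers (R̄ = 4.653, subgroups of 5); the d₂ values are the standard SPC bias-correction factors:

```python
# d2 factors keyed by subgroup size (standard SPC table values)
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

average_range = 4.653  # Rbar from the timing data
subgroup_size = 5

sigma_hat = average_range / D2[subgroup_size]  # estimated standard deviation
```

This gives roughly 2.0 seconds, the value used in the Z-score slides that follow.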
• Now we know the grand mean, and we know the estimated standard deviation
– From this we can calculate the location of the “tails” of the distribution curve
– Add three estimated standard deviations to the grand mean for the upper tail
– Subtract three estimated standard deviations from the grand mean for the lower tail
• Now, let’s add the upper and lower specification limits to the curve
– We know that the upper limit is 14 seconds
– We can assume the lower limit is 0 seconds
• Now we can see that some of the data is outside the limits that have been set, but how much?
Z scores
• We are going to analyze the data and determine how badly we are out of specification.– Z scores will help us to determine that
• Zupper = (14 – 10.00) / 2.00 = 2.00
• Now we will look at the Z table to see what percentage that is
• We see that the tail area for a Z score of 2.00 is .0228, which when changed to a percentage is 2.28%
• Zlower = (10.00 – 0) / 2.00 = 5.00
• Now we will look at the Z table to see what percentage that is
• We see that the tail area is so small that it is insignificant, or effectively 0%
• The total percentage outside the limits is
– 2.28% + 0% = 2.28%
• From this information, we can be fairly sure that if we watch this process continually, about 2.28% of the time it will be over the 14-second limit.
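The Z-score lookups can be reproduced with the normal tail computed from `math.erf` instead of a printed table; a sketch with the values above:

```python
import math

def upper_tail(z):
    """Area under the standard normal curve beyond +z (the Z-table value)."""
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

grand_mean, sigma_hat = 10.00, 2.00
usl, lsl = 14.0, 0.0

z_upper = (usl - grand_mean) / sigma_hat  # 2.00
z_lower = (grand_mean - lsl) / sigma_hat  # 5.00

pct_over = upper_tail(z_upper) * 100   # percent of cycles beyond 14 s
pct_under = upper_tail(z_lower) * 100  # effectively zero
```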
• Now let’s see if the process itself can stay within the limited time allowed
– We are going to compare the width of the calculated spread vs. the specified spread
– The allowed spread is the difference between the lower specification limit and the upper specification limit
• USL – LSL = 14 – 0 = 14
• Now we take that specification spread and divide it by six times the estimated standard deviation. This ratio is the Cp
– Why six times the standard deviation? Because ±3σ covers 99.74% of a normal process’s output
• 6 × 2.00 = 12
• 14 / 12 = 1.17
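The same arithmetic as a one-line capability calculation, using the timing-example numbers:

```python
usl, lsl = 14.0, 0.0   # specification limits from the timing example
sigma_hat = 2.00       # estimated standard deviation (Rbar / d2)

cp = (usl - lsl) / (6 * sigma_hat)  # specified spread vs. process spread
```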
• From the previous slide, it is obvious that we want the actual spread to be less than the specified spread
– A Cp of more than one is desirable, and the higher the better
• The only problem with this is that we can have a Cp of 2 but still be outside the limits of the specification
• So there is another calculation that will measure this. It is called Cpk
• Cpk is simple. It is the smallest of the Z scores, or Zmin, divided by 3
• In our case, that would be 2.00 / 3 = .67
• As with Cp, we would want the result to be greater than 1.
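And the corresponding Cpk calculation, using the two Z scores from the timing example:

```python
z_upper = 2.00  # (USL - grand mean) / sigma-hat
z_lower = 5.00  # (grand mean - LSL) / sigma-hat

cpk = min(z_upper, z_lower) / 3  # Cpk = Zmin / 3
```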
• So in our example we had a Z score of 2.00, which told us that 2.28% of the time we were over 14 seconds
• We also know that the Cp was 1.17, which suggests the process spread can fit within the specified limits
• But we also found that the Cpk was .67, which tells us the process is not centered well enough and is running outside the limits