Quality tools

Quality Tools and Techniques Dafni C. Carreon FT-54

Transcript of Quality tools

Page 1: Quality tools

Quality Tools and Techniques

Dafni C. Carreon, FT-54

Page 2: Quality tools

OUTLINE

- Process Variation
- Sampling Methods
- Verification and Validation
- Risk Management Tools
- Six Levels of Cognition Based on Bloom's Taxonomy

Page 3: Quality tools

Process Variation

Dafni C. Carreon, FT-54

Page 4: Quality tools

Process Variation

Process variation is the main cause of quality problems, whether in business (transactional) or production processes.

It is the inevitable change in the output or result of a system (process), because all systems vary over time. The two major types of variation are (1) common, which is inherent in a system, and (2) special, which is caused by changes in the circumstances or environment. 48

Page 5: Quality tools

Process Variation

Causes of process variation

A process variation is a result outside the range expected for a process. Process variation may be caused by a wide variety of factors, including:

- resource variation
- human error (e.g., setup employees did not set the fill rate correctly)
- wear and tear (equipment is slightly worn out)
- information system (e.g., did not translate the targeted fill rate correctly)
- line speed
- temperature
- new process
- new equipment
- new workers
- new materials

16

Page 6: Quality tools

Process Variation

1. Common And Special Causes

These are the two distinct origins of variation in a process, as defined in the statistical thinking and methods of Walter A. Shewhart and W. Edwards Deming. Briefly, "common causes", also called natural patterns, are the usual, historical, quantifiable variation in a system, while "special causes" are unusual, not previously observed, non-quantifiable variation.

17

Page 7: Quality tools

Process Variation

Common-cause variation

Common cause variation is fluctuation caused by unknown factors resulting in a steady but random distribution of output around the average of the data. It is a measure of the process's potential, or how well the process can perform when special cause variation is removed; therefore, it is a measure of the process technology. Common cause variation is also called random variation, noise, noncontrollable variation, within-group variation, or inherent variation.

Common-cause variation is characterised by:
- phenomena constantly active within the system;
- variation predictable probabilistically;
- irregular variation within an historical experience base; and
- lack of significance in individual high or low values.

18

Page 8: Quality tools

Process Variation

Walter A. Shewhart originally used the term chance cause. The term common cause was coined by Harry Alpert in 1947. The Western Electric Company used the term natural pattern. Shewhart described a process that features only common-cause variation as being in statistical control. This term is deprecated by some modern statisticians, who prefer the phrase stable and predictable.

Common cause variation is the variation that remains after removing the special (non-normal) causes attributable to one or more of the 5Ms and an E (Manpower, Material, Method, Measurement, Machine, and Environment), also known as the 6Ms (Manpower, Mother Nature, Materials, Method, Measurement, Machine).

18, 19, 20

Page 9: Quality tools

Process Variation

Examples of Common causes

- Inappropriate procedures
- Poor design
- Poor maintenance of machines
- Lack of clearly defined standard operating procedures
- Poor working conditions (e.g., lighting, noise, dirt, temperature, ventilation)
- Substandard raw materials
- Measurement error
- Quality control error
- Vibration in industrial processes
- Ambient temperature and humidity
- Normal wear and tear
- Variability in settings
- Computer response time

Page 10: Quality tools

Process Variation

Special Cause Variation

Special cause variation is the result of unpredictable errors. For example, a new admitter without proper training is put on the midnight shift of a busy inner-city emergency room. Clearly the number of admitting errors is going to be very high until she obtains more training, coaching, and experience. How many actual errors she will make is highly unpredictable. In this situation, the root problem is not the process but one of the admitters. A control chart helps to clearly distinguish between special cause and common cause variation.

Special-cause variation always arrives as a surprise. It is the signal within a system.

Walter A. Shewhart originally used the term assignable cause. The term special-cause was coined by W. Edwards Deming. The Western Electric Company used the term unnatural pattern.

20, 21, 22

Page 11: Quality tools

Process Variation

Examples Of Special causes

- Poor adjustment of equipment
- Operator falls asleep
- Faulty controllers
- Machine malfunction
- Fall of ground
- Computer crash
- Poor batch of raw material
- Power surges
- High healthcare demand from elderly people
- Broken part
- Abnormal traffic (click fraud) on web ads
- Extremely long lab testing turnover time due to switching to a new computer system
- Operator absent

Page 12: Quality tools

Process Variation

2. Process performance metrics

A performance metric is a measure that determines an organization's behaviour and performance. Performance metrics measure an organization's activities and performance, and should support a range of stakeholder needs, from customers and shareholders to employees.

In project management, performance metrics are used to assess the health of the project and consist of measuring seven criteria: safety, time, cost, resources, scope, quality, and actions.

There are a variety of ways in which organizations may react to results. They may trigger specific activity relating to performance (i.e., an improvement plan) or use the data merely for statistical information. Often closely tied to outputs, performance metrics should usually encourage improvement, effectiveness, and appropriate levels of control.

Performance metrics are often linked with corporate strategy and are often derived in order to measure performance against a critical success factor.

23, 24, 25, 26

Page 13: Quality tools

Process Variation

Performance Metrics and Descriptions

Here is a list of the performance metrics, spelled out and then given an acronym if one is commonly used, with a description of what each metric means.

1. Percentage Defective: What percentage of parts contain one or more defects?

2. Parts per Million (PPM): What is the average number of defective parts per million? This is the defective ratio from metric 1 above (before converting to a percentage) multiplied by 1,000,000.

3. Defects per Unit (DPU): What is the average number of defects per unit?

4. Defects per Opportunity (DPO): What is the average number of defects per opportunity? (Here an opportunity is the number of different ways a defect can occur in a single part.)

27

Page 14: Quality tools

Process Variation

5. Defects per Million Opportunities (DPMO): The defects-per-opportunity figure from metric 4 above multiplied by 1,000,000.

6. Rolled Throughput Yield (RTY): The yield stated as a percentage of the number of parts that go through a multi-stage process without a defect.

7. Process Sigma: The sigma level associated with either the DPMO or PPM level found in metric 5 or 2 above.

8. Cost of Poor Quality: The cost of defects, either internal (rework/scrap) or external (warranty/product liability).

27

Page 15: Quality tools

Process Variation

Performance metrics: discussion and examples

1. Percentage Defective. This is defined as (total number of defective parts)/(total number of parts) x 100. So if there are 1,000 parts and 10 of those are defective, the percentage of defective parts is (10/1,000) x 100 = 1%.

2. PPM. The same ratio as defined in metric 1, but multiplied by 1,000,000. For the example given above, 1 out of 100 parts being defective means that 10,000 out of 1,000,000 will be defective, so the PPM = 10,000.

NOTE: The percentage defective and PPM only tell you whether or not a unit has one or more defects. To get a clear picture of how many defects there are (since each unit can have multiple defects), you need metrics 3, 4, and 5.

27

Page 16: Quality tools

Process Variation

3. Defects per Unit. Here the AVERAGE number of defects per unit is calculated, which means you have to categorize the units by how many defects they have, from 0, 1, 2, up to the maximum number. Take the following table, which shows how many units out of 100 total have 0, 1, 2, and so on, defects, all the way to the maximum of 5 (the unit counts sum to 100):

Defects:    0   1   2   3   4   5
# of Units: 70  20  5   4   0   1

The average number of defects is

DPU = [sum of all (D x U)]/100 = [(0 x 70) + (1 x 20) + (2 x 5) + (3 x 4) + (4 x 0) + (5 x 1)]/100 = 47/100 = 0.47

4. Defects per Opportunity. How many ways are there for a defect to occur in a unit? This is called a defect "opportunity", which is akin to a "failure mode". Take the previous example in metric 3 and assume that each unit can have a defect occur in one of 6 possible ways. Then the number of opportunities for a defect in each unit is 6, and

DPO = DPU/O = 0.47/6 = 0.078333
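To make metrics 1 through 5 concrete, here is a minimal Python sketch of the worked example above (the variable names are illustrative, not part of the source material):

```python
# Defect metrics from the table above: {number of defects: number of units}.
defect_counts = {0: 70, 1: 20, 2: 5, 3: 4, 4: 0, 5: 1}

total_units = sum(defect_counts.values())                 # 100
defective_units = total_units - defect_counts[0]          # units with >= 1 defect
pct_defective = defective_units / total_units * 100       # metric 1: 30.0 %
ppm = defective_units / total_units * 1_000_000           # metric 2: 300,000

dpu = sum(d * u for d, u in defect_counts.items()) / total_units  # metric 3: 0.47
opportunities_per_unit = 6
dpo = dpu / opportunities_per_unit                        # metric 4: ~0.078333
dpmo = dpo * 1_000_000                                    # metric 5: ~78,333

print(f"%def={pct_defective}, PPM={ppm:.0f}, DPU={dpu}, DPO={dpo:.6f}, DPMO={dpmo:.0f}")
```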

Page 17: Quality tools

Process Variation

5. Defects per Million Opportunities. This is exactly analogous to the relationship between percentage defective and PPM (metrics 1 and 2): you take metric 4, the defects per opportunity, and multiply by 1,000,000. So, using the example in metric 3:

DPMO = DPO x 1,000,000 = 0.078333 x 1,000,000 = 78,333

6. Rolled Throughput Yield. This takes the percentage of units that pass through several subprocesses of an entire process without a defect. The number of units without a defect is equal to the number of units that enter a process minus the number of defective units. Let the number of units that enter a process be P and the number of defective units be D. Then the first-pass yield (FPY) for each subprocess is equal to (P - D)/P. Once you have the FPY for each subprocess, you multiply them all together.

If the yields of 4 subprocesses are 0.994, 0.987, 0.951 and 0.990, then

RTY = (0.994)(0.987)(0.951)(0.990) = 0.924, or 92.4%.
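The same RTY calculation as a short sketch in Python:

```python
# Rolled Throughput Yield: multiply the first-pass yields of each subprocess.
from math import prod

first_pass_yields = [0.994, 0.987, 0.951, 0.990]  # FPY = (P - D) / P per subprocess
rty = prod(first_pass_yields)
print(f"RTY = {rty:.3f} ({rty:.1%})")             # RTY = 0.924 (92.4%)
```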

Page 18: Quality tools

Process Variation

7. Process Sigma. What is a Six Sigma process? It is the output of a process that has a mean of 0 and standard deviation of 1, with the upper specification limit (USL) and lower specification limit (LSL) set at +6 and -6, respectively. However, there is also the matter of the 1.5-sigma shift, which occurs over the long term.

8. Cost of Poor Quality. Also known as the cost of nonconformance, this is the cost it takes to deal with defects either
a) internally, i.e., before they leave the company, through scrapping, repairing, or reworking the parts, or
b) externally, i.e., after they leave the company, through costs of warranty, returned merchandise, or product liability claims and lawsuits.

This is obviously more difficult to calculate because the external costs can be delayed by months or even years after the products are sold. It is best, therefore, to measure those costs which are relatively easy to calculate and quickly available, i.e., the internal costs of poor quality.
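As a sketch of the conversion (assuming the conventional normal model and the 1.5-sigma long-term shift mentioned above), process sigma can be derived from DPMO with the standard library:

```python
# Convert a DPMO figure to a long-term process sigma level, applying the
# conventional 1.5-sigma shift.
from statistics import NormalDist

def process_sigma(dpmo: float, shift: float = 1.5) -> float:
    # Fraction of output inside spec, then the z-value that bounds it.
    yield_fraction = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

print(round(process_sigma(78_333), 2))  # ~2.92 for the DPMO found in metric 5
print(round(process_sigma(3.4), 2))     # ~6.0, the classic Six Sigma benchmark
```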

Page 19: Quality tools

Process Variation

Cp and Cpk

Cp and Cpk are statistical measures of process quality capability. Some segments in manufacturing have specified minimal requirements for these parameters, even for some of their key documents, such as advanced product quality planning and ISO/TS 16949.

Cp and Cpk are calculated when the process is not stable, yet one desires to estimate how good the process might be if no special causes existed.

Cpk uses a "best estimate" of the true process standard deviation (sigma-hat). Special causes are excluded from the data when appropriate, to estimate the "potential" natural process variation. A theoretical process sigma-hat is calculated and Cp/Cpk estimated.

Cp = Process Capability: a simple and straightforward indicator of process capability.
Cpk = Process Capability Index: an adjustment of Cp for the effect of a non-centered distribution.

28, 29

Page 20: Quality tools

Process Variation

Cp

This is a process capability index that indicates the process's potential performance by relating the natural process spread to the specification (tolerance) spread. It is often used during the product design phase and pilot production phase.

Cp = (Specification Range)/(6s) = (USL - LSL)/(6s)

where USL is the Upper Specification Limit and LSL is the Lower Specification Limit.

When calculating Cp, the evaluation considers only the quantity of process variation related to the specification limit range. This method, besides being applicable only to processes with both upper and lower specification limits, does not provide information about process centering.

30

Page 21: Quality tools

Process Variation

Cpk (2-Sided Specification Limits)

This is a process capability index that indicates the process's actual performance by accounting for a shift in the mean of the process toward either the upper or lower specification limit. It is often used during the pilot production phase and during the routine production phase.

Cpku = Cpk relative to the Upper Specification Limit
Cpkl = Cpk relative to the Lower Specification Limit

30
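A minimal sketch of both indices, assuming the textbook formulas Cp = (USL - LSL)/(6s) and Cpk = min[(USL - mean)/(3s), (mean - LSL)/(3s)]; the sample data and limits below are invented for illustration:

```python
# Cp and Cpk from sample data; the sample standard deviation stands in
# for sigma-hat here.
from statistics import mean, stdev

def cp_cpk(data, lsl, usl):
    m, s = mean(data), stdev(data)
    cp = (usl - lsl) / (6 * s)      # potential capability (spread only)
    cpku = (usl - m) / (3 * s)      # capability vs. upper limit
    cpkl = (m - lsl) / (3 * s)      # capability vs. lower limit
    return cp, min(cpku, cpkl)      # Cpk penalizes an off-center mean

measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
cp, cpk = cp_cpk(measurements, lsl=9.4, usl=10.6)
print(round(cp, 2), round(cpk, 2))  # 1.53 1.53 (centered process: Cp == Cpk)
```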

Page 22: Quality tools

Process Variation

Outlier

An outlier is an observation point that is distant from other observations. An outlier may be due to variability in the measurement or it may indicate experimental error; the latter are sometimes excluded from the data set.

Although definitions vary, an outlier is generally considered to be a data point that is far outside the norm for a variable or population (e.g., Jarrell, 1994; Rasmussen, 1988; Stevens, 1984). Hawkins described an outlier as an observation that "deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism" (Hawkins, 1980, p. 1). Outliers have also been defined as values that are "dubious in the eyes of the researcher" (Dixon, 1950, p. 488) and contaminants (Wainer, 1976).

Outliers can arise from several different mechanisms or causes. Anscombe (1960) sorts outliers into two major categories: those arising from errors in the data, and those arising from the inherent variability of the data. Not all outliers are illegitimate contaminants, and not all illegitimate scores show up as outliers (Barnett & Lewis, 1994). It is therefore important to consider the range of causes that may be responsible for outliers in a given data set. What should be done about an outlying data point is at least partly a function of the inferred cause.

31, 32

Page 23: Quality tools

Process Variation

Outliers from data errors. Outliers are often caused by human error, such as errors in data collection, recording, or entry. Data from an interview can be recorded incorrectly, or miskeyed upon data entry.

Outliers from intentional or motivated misreporting. There are times when participants purposefully report incorrect data to experimenters or surveyors.

Outliers from sampling error. Another cause of outliers or fringeliers is sampling. It is possible that a few members of a sample were inadvertently drawn from a different population than the rest of the sample.

Outliers from standardization failure. Outliers can be caused by research methodology, particularly if something anomalous happened during a particular subject's experience.

Outliers from faulty distributional assumptions. Incorrect assumptions about the distribution of the data can also lead to the presence of suspected outliers (e.g., Iglewicz & Hoaglin, 1993).

Page 24: Quality tools

Process Variation

Outliers as legitimate cases sampled from the correct population. Finally, it is possible that an outlier can come from the population being sampled legitimately through random chance. It is important to note that sample size plays a role in the probability of outlying values. Within a normally distributed population, it is more probable that a given data point will be drawn from the most densely concentrated area of the distribution rather than one of the tails (Evans, 1999; Sachs, 1982). As a researcher casts a wider net and the data set becomes larger, the more the sample resembles the population from which it was drawn, and thus the likelihood of outlying values becomes greater.

Outliers as potential focus of inquiry. We all know that interesting research is often as much a matter of serendipity as planning and inspiration. Outliers can represent a nuisance, error, or legitimate data. 33

Page 25: Quality tools

Process Variation

Impact of Outliers on Distributions

Outliers are isolated extreme high or low values. If they exist, the distribution is skewed in the direction of the outlier(s).

A. How to identify outliers:
a. Outside 2 standard deviations
b. Outside 3 standard deviations
c. Outside the 99th percentile
d. Depends on the study and the variable

B. Outlier effect on central tendency:
1. Little impact on the mode and median.
2. Big impact on the mean: extremely high values pull the mean up, and extremely low values pull the mean down. For example, in age data, an age of 99 can pull the mean up to 60, while an age of 10 can pull the mean down to 19.
3. In a normally distributed variable, there are no extreme outliers.

C. Outlier effect on dispersion:
1. Big impact on the range, variance, and standard deviation.
2. Remove or transform them before calculating the standard deviation.

34
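Identification rules A.a and A.b above can be sketched in a few lines (the cutoff k and the age data are illustrative):

```python
# k-sigma outlier screen: flag values more than k sample standard
# deviations from the mean.
from statistics import mean, stdev

def flag_outliers(values, k=3):
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > k * s]

ages = [19, 21, 22, 20, 23, 18, 22, 99]   # 99 is the extreme high value
print(flag_outliers(ages, k=2))           # -> [99]
```

Note that the outlier itself inflates the sample standard deviation, which is one reason the appropriate cutoff depends on the study and the variable (rule A.d).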

Page 26: Quality tools

Sampling Methods

Page 27: Quality tools

Sampling Methods

1. Acceptance sampling plans

Acceptance sampling uses statistical sampling to determine whether to accept or reject a production lot of material. It has been a common quality control technique used in industry. It is usually done as products leave the factory or, in some cases, even within the factory. Most often a producer supplies a consumer a number of items, and a decision to accept or reject the lot is made by determining the number of defective items in a sample from the lot. The lot is accepted if the number of defectives is at or below the acceptance number; otherwise, the lot is rejected.

Sampling plans are used to protect against irregular degradation of levels of quality in submitted lots below that considered permissible by the consumer. They also protect the producer in the sense that lots produced at permissible levels of quality will have a good chance of being accepted by the plan.

35
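To illustrate how such a plan behaves, here is a sketch that computes one point on a plan's operating characteristic (OC) curve under a binomial model; the plan parameters n = 50 and c = 2 are invented for illustration:

```python
# Single sampling plan: draw n items and accept the lot if the number of
# defectives found is <= c (the acceptance number).
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability of accepting a lot whose defective fraction is p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

print(round(prob_accept(n=50, c=2, p=0.02), 3))  # ~0.92: a 2%-defective lot
                                                 # is usually accepted
```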

Page 28: Quality tools

Sampling Methods

Types of acceptance sampling plans

Sampling plans can be categorized across several dimensions:

- Sampling by attributes vs. sampling by variables: When the item inspection leads to a binary result (either the item is conforming or nonconforming) or the number of nonconformities in an item is counted, we are dealing with sampling by attributes. If the item inspection leads to a continuous measurement, then we are sampling by variables.

- Incoming vs. outgoing inspection: If the batches are inspected before the product is shipped to the consumer, it is called outgoing inspection. If the inspection is done by the consumer, after the batches were received from the supplier, it is called incoming inspection.

- Rectifying vs. non-rectifying sampling plans: This determines what is done with nonconforming items found during the inspection. When the cost of replacing faulty items with new ones, or reworking them, is accounted for, the sampling plan is rectifying. 36

Page 29: Quality tools

Sampling Methods

- Single, double, and multiple sampling plans: The sampling procedure may consist of drawing a single sample, or it may be done in two or more steps. A double sampling procedure means that if the sample taken from the batch is not informative enough, another sample is taken. In multiple sampling, additional samples can be drawn after the second sample.

36, 37


Page 36: Quality tools

Sampling Methods

2. Types of sampling

A sample is "a smaller (but hopefully representative) collection of units from a population used to determine truths about that population" (Field, 2005).

Three factors influence sample representativeness:
- Sampling procedure
- Sample size
- Participation (response)

When might you sample the entire population?
- When your population is very small
- When you have extensive resources
- When you don't expect a very high response

38


Page 38: Quality tools

Sampling Methods

Random sampling

Random sampling is the purest form of probability sampling: each member of the population has an equal and known chance of being selected. When there are very large populations, it is often difficult or impossible to identify every member of the population, so the pool of available subjects becomes biased.

Disadvantages:
- If the sampling frame is large, this method is impracticable.
- Minority subgroups of interest in the population may not be present in the sample in sufficient numbers for study.

38, 39

Page 39: Quality tools

Sampling Methods

Systematic sampling

Systematic sampling is often used instead of random sampling. It is also called an Nth-name selection technique. After the required sample size has been calculated, every Nth record is selected from a list of population members. As long as the list does not contain any hidden order, this sampling method is as good as the random sampling method. Its only advantage over the random sampling technique is simplicity. Systematic sampling is frequently used to select a specified number of records from a computer file. 39

Page 40: Quality tools

Sampling Methods

Advantages:
- Sample easy to select
- Suitable sampling frame can be identified easily
- Sample evenly spread over entire reference population

Disadvantages:
- Sample may be biased if a hidden periodicity in the population coincides with that of the selection.
- Difficult to assess the precision of an estimate from one survey.

36

Page 41: Quality tools

Sampling Methods

Stratified sampling

Stratified sampling is a commonly used probability method that is superior to random sampling because it reduces sampling error. A stratum is a subset of the population that shares at least one common characteristic. Examples of strata might be males and females, or managers and non-managers. The researcher first identifies the relevant strata and their actual representation in the population. Random sampling is then used to select a sufficient number of subjects from each stratum. "Sufficient" refers to a sample size large enough for us to be reasonably confident that the stratum represents the population. Stratified sampling is often used when one or more of the strata in the population have a low incidence relative to the others. 39

Page 42: Quality tools

Sampling Methods

Disadvantages:
- First, a sampling frame of the entire population has to be prepared separately for each stratum.
- Second, when examining multiple criteria, stratifying variables may be related to some, but not to others, further complicating the design and potentially reducing the utility of the strata.
- Finally, in some cases (such as designs with a large number of strata, or those with a specified minimum sample size per group), stratified sampling can potentially require a larger sample than would other methods.

38

Page 43: Quality tools

Sampling Methods

Cluster Sampling

Cluster sampling is an example of two-stage sampling: in the first stage a sample of areas is chosen; in the second stage a sample of respondents within those areas is selected.

The population is divided into clusters of homogeneous units, usually based on geographical contiguity. The sampling units are groups rather than individuals. A sample of such clusters is then selected, and all units from the selected clusters are studied.

Advantages: Cuts down on the cost of preparing a sampling frame, and can reduce travel and other administrative costs.

Disadvantages: Sampling error is higher than for a simple random sample of the same size. 38

Page 44: Quality tools

Sampling Methods

Difference Between Strata and Clusters

Although strata and clusters are both non-overlapping subsets of the population, they differ in several ways.

- All strata are represented in the sample, but only a subset of clusters is in the sample.
- With stratified sampling, the best survey results occur when elements within strata are internally homogeneous. With cluster sampling, by contrast, the best results occur when elements within clusters are internally heterogeneous.

38
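A standard-library sketch contrasting three of the probability methods above (the population and strata are made up for illustration):

```python
import random

population = list(range(1, 101))            # 100 numbered members

# Simple random sampling: every member has an equal chance.
srs = random.sample(population, k=10)

# Systematic sampling: every Nth record after a random start.
n = len(population) // 10                   # N for a sample of 10
start = random.randrange(n)
systematic = population[start::n]

# Stratified sampling: a random sample drawn within each stratum.
strata = {"managers": population[:20], "non_managers": population[20:]}
stratified = {name: random.sample(group, k=5) for name, group in strata.items()}

print(srs, systematic, stratified, sep="\n")
```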

Page 45: Quality tools

Sampling Methods

Statistical vs. Non-statistical Sampling

40

Page 46: Quality tools

Sampling Methods

3. Sampling Terms

1. Consumer risk is the probability that a product will be manufactured that is defective and shipped to the customer. A person with a customer-only focus will typically want to have a very small consumer risk. A person with a producer-only focus typically is not very concerned with consumer risk. Low consumer risk can sometimes be accomplished by rigorous testing and quality control, which, when carried to an extreme in order to reach zero consumer risk, can lead to very expensive products.

2. Producer risk is the probability that a product will be manufactured that is good, but is rejected by the manufacturer's internal quality control processes before it is shipped to the customer. A person with a producer-only focus will typically want to have a very small producer risk. A person with a consumer-only focus typically is not very concerned with producer risk. Low producer risk can be accomplished by lax testing and quality control, which, when carried to an extreme in order to reach zero producer risk, can lead to very poorly performing or non-yielding products.

The key to high-yielding and reliable products is achieving a balance between these two sometimes-competing goals.

41

Page 47: Quality tools

Sampling Methods

3. Target population is the entire group a researcher is interested in: the group about which the researcher wishes to draw conclusions.

4. Independent samples are those samples selected from the same population, or different populations, which have no effect on one another. That is, no correlation exists between the samples.

6. Bias refers to how far the average statistic lies from the parameter it is estimating, that is, the error which arises when estimating a quantity. Errors from chance will cancel each other out in the long run; those from bias will not.

7. Confidence level refers to the percentage of all possible samples that can be expected to include the true population parameter. For example, suppose all possible samples were selected from the same population, and a confidence interval were computed for each sample. A 95% confidence level implies that 95% of the confidence intervals would include the true population parameter.
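The meaning of term 7 can be demonstrated by simulation (a sketch using a normal-approximation interval; all constants are invented):

```python
# Roughly 95% of the intervals built this way should cover the true mean.
import random
from statistics import mean, stdev

random.seed(1)
TRUE_MEAN, TRUE_SD, N, TRIALS = 50, 10, 30, 1000

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    half_width = 1.96 * stdev(sample) / N**0.5    # normal-approximation CI
    if abs(mean(sample) - TRUE_MEAN) <= half_width:
        covered += 1

print(covered / TRIALS)  # close to 0.95
```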

Page 48: Quality tools

Change Control and Configuration Management

Page 49: Quality tools

Change Control and Configuration Management

CHANGE CONTROL

Change control within quality management systems (QMS) and information technology (IT) systems is a formal process used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of software. The goals of a change control procedure usually include minimal disruption to services, reduction in back-out activities, and cost-effective utilization of the resources involved in implementing change.

Therefore, an effective change control system is a key component of any quality assurance system.

42, 43


Page 52: Quality tools

Change Control and Configuration Management

CONFIGURATION MANAGEMENT SYSTEM

A configuration management system includes the set of policies, practices, and tools that help an organization maintain software configurations. The primary purpose of a configuration management system is to maintain the integrity of the software artifacts of an organization. Consequently, configuration management systems identify the history of software artifacts and their larger aggregate configurations, systematically control how these artifacts change over time, and maintain interrelationships among them.

Principles

Principle 1: Protect critical data and other resources.

The process of developing software produces many artifacts, including the definition of requirements, design specifications, work breakdown structures, test plans, and code. All of these artifacts generally undergo numerous revisions as they are created. The loss of such artifacts and their revisions can cause great harm (e.g., financial loss, schedule slip) to an organization. Thus, it is vital that these artifacts and their interrelationships be reliably maintained. This implies that these artifacts are always accessible to consumers or quickly recoverable when failure does occur.

44

Page 53: Quality tools

Change Control and Configuration Management

Principle 2: Monitor and control software development procedures and processes.

An organization should define the processes and procedures that it uses to produce artifacts. Such definition will provide a basis for measuring the quality of the processes and procedures. However, to produce meaningful measures of the processes and procedures, the organization must follow them. Consequently, the organization must monitor its practitioners to ensure that they follow the software development processes and procedures.

Principle 3: Automate processes and procedures when cost effective.

The automation of processes and procedures has two primary benefits. First, it guarantees that an organization consistently applies them, which means that it is more likely to produce quality products. Second, automation improves the productivity of the people that must execute the processes and procedures, because such automation reduces the tasks that they must perform, which permits them to perform more work. 44

Page 54: Quality tools

Change Control and Configuration Management

Principle 4: Provide value to customers.

Three issues ultimately affect the success of a product. The first is that a product must reliably meet the needs of its customers; that is, it must provide the desired functionality and do it in a consistent and reliable manner. Second, a product should be easy to use. Third, an organization must address user concerns and issues in a timely manner. All three of these issues affect customer value, and a configuration management tool should automate those practices that provide the greatest value to its user community.

Principle 5: Software artifacts should have high quality.

There are many measures of product quality. Such measures attempt to identify several qualities of a product, such as its adaptability, efficiency, generality, maintainability, reliability, reusability, simplicity, and understandability.

Principle 6: Software systems should be reliable.

Software systems should work as their users expect them to function. They also should have no significant defects, which means that software systems should never cause significant loss of data or otherwise cause significant harm. Thus, these systems should be highly accessible and require little maintenance.

44

Page 55: Quality tools

Change Control and Configuration Management


Principle 7: Assure that products provide only necessary features, or those having high value.

Products should only provide the required features and capabilities desired by their users. The addition of nonessential features and capabilities that provide little, if any, value to the users tends to lower product quality. Besides, an organization can better use the expended funds in another manner.

Principle 8: Software systems should be maintainable.

Maintainable software systems are generally simple, highly modular, and well designed and documented. They also tend to exhibit low coupling. Since most software is used for many years, maintenance costs for large software systems generally exceed original development costs. 44

Page 56: Quality tools

Change Control and Configuration Management

Principle 9: Use critical resources efficiently.

Numerous resources are used or consumed to develop software, as well as by the software products themselves. Such resources are generally scarce, and an organization should use them as efficiently as possible.

Principle 10: Minimize development effort.

Human effort is a critical resource, but one that is useful to distinguish from those that do not involve personnel. The primary motivation to use human resources efficiently is to minimize development costs. In addition, the benefits of minimizing the number of personnel used to develop software increase at a greater than linear rate. 44

Page 57: Quality tools

Change Control and Configuration Management

CM IN HARDWARE AND PRODUCT

Configuration Management (CM) is the application of appropriate resources, processes, and tools to establish and maintain consistency between the product requirements, the product, and associated product configuration information.

45

Page 58: Quality tools

Change Control and Configuration Management

CM facilitates orderly identification of product attributes, and:
- Provides control of product information.
- Manages product changes that improve capabilities, correct deficiencies, improve performance, enhance reliability and maintainability, or extend product life.
- Manages departures from product requirements.
- BOM management
- Reuse of assemblies/parts
- Baseline management
- As-Built, As-Designed, As-Maintained tracking
- Action item tracking
- Configuration management best practices built in
- Embedded rules base
- Item definition
- Multi- and single-level used-on queries
- Change tracking (more than just a form)
- Configuration item identification
- Multiple product line management

45

Page 59: Quality tools

Verification and Validation

Page 60: Quality tools

Verification and Validation

DEFINITION

- Verification and validation is the generic name given to checking processes which ensure that the product and process conform to their specification and meet the needs of the customer.
- It starts with requirements reviews and continues through design and code reviews to product testing.
- Verification and validation are independent procedures that are used together to check that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose.

1, 2

Page 61: Quality tools

Verification and Validation

Verification and validation is an important key in quality tools and techniques.

The results of verification and validation form an important component of the safety case, which is a document used to support certification.

Thorough verification and validation does not prove that the system is safe or dependable, and there is always a limit to how much testing is enough testing.

Page 62: Quality tools

Verification and Validation

DIFFERENCE

Verification is the confirmation, through objective evidence, that the specified requirements have been fulfilled. Verification tasks all point back to the requirements. Does the design correctly and completely embody the requirements? Is the implementation a correct representation of the requirements? Is the system being built right?

Validation is the confirmation, through objective evidence, that the system will perform its intended functions. The intended functions, and how well the system performs those functions, are determined by the customer. Did you create the system the customer really wanted? Will the system fulfill the customer's needs? Is this the right system for the customer?

3

Page 63: Quality tools

VERIFICATION TECHNIQUES

There are many different verification techniques, but they all basically fall into two major categories: dynamic testing and static testing.

Dynamic testing - Testing that involves the execution of a system or component. Basically, a number of test cases are chosen, where each test case consists of test data. These input test cases are used to determine output test results. Dynamic testing can be further divided into three categories: functional testing, structural testing, and random testing.

Functional testing - Testing that involves identifying and testing all the functions of the system as defined within the requirements. This form of testing is an example of black-box testing, since it involves no knowledge of the implementation of the system.

Structural testing - Testing that has full knowledge of the implementation of the system and is an example of white-box testing. It uses the information from the internal structure of a system to devise tests to check the operation of individual components. Functional and structural testing both choose test cases that investigate a particular characteristic of the system. 4

Verification and Validation

Page 64: Quality tools

Random testing - Testing that freely chooses test cases among the set of all possible test cases. The use of randomly determined inputs can detect faults that go undetected by other systematic testing techniques. Exhaustive testing, where the input test cases consist of every possible set of input values, is a form of random testing. Although exhaustive testing performed at every stage in the life cycle results in a complete verification of the system, it is realistically impossible to accomplish. [Andriole86]
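As an illustration, here is a random-testing sketch in Python; the sort wrapper is a hypothetical component under test, and the asserted properties are generic:

```python
# Random testing: feed randomly generated inputs to the component under
# test and check properties that must always hold.
import random

def my_sort(xs):  # hypothetical component under test
    return sorted(xs)

for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    out = my_sort(data)
    assert all(a <= b for a, b in zip(out, out[1:])), f"not ordered: {data}"
    assert sorted(out) == sorted(data), f"elements changed: {data}"
print("1000 random test cases passed")
```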

Consistency techniques - Techniques that are used to ensure program properties such as correct syntax, correct parameter matching between procedures, correct typing, and correct requirements and specifications translation.

Measurement techniques - Techniques that measure properties such as error proneness, understandability, and well-structuredness.

4

Verification and Validation

Page 65: Quality tools

VALIDATION TECHNIQUES

There are also numerous validation techniques, including formal methods, fault injection, and dependability analysis. Validation usually takes place at the end of the development cycle and looks at the complete system, as opposed to verification, which focuses on smaller sub-systems.

Formal methods - Formal methods is not only a verification technique but also a validation technique. Formal methods means the use of mathematical and logical techniques to express, investigate, and analyze the specification, design, documentation, and behavior of both hardware and software.

Fault injection - Fault injection is the intentional activation of faults by either hardware or software means to observe the system operation under fault conditions.

Hardware fault injection - Can also be called physical fault injection, because we are actually injecting faults into the physical hardware.

5

Verification and Validation

Page 66: Quality tools

Software fault injection - Errors are injected into the memory of the computer by software techniques. Software fault injection is basically a simulation of hardware fault injection.
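A toy sketch of the software approach (the bit-flip helper and the checksum are invented for illustration):

```python
# Software fault injection: flip one bit of a value in "memory" and
# observe whether downstream logic detects the corruption.
import random

def flip_random_bit(value: int, width: int = 16) -> int:
    return value ^ (1 << random.randrange(width))

def checksum_ok(payload: int, checksum: int) -> bool:
    return payload % 251 == checksum          # toy integrity check

payload = 12345
checksum = payload % 251
faulty = flip_random_bit(payload)             # the injected memory fault
print(f"fault {payload} -> {faulty}, detected: {not checksum_ok(faulty, checksum)}")
```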

Dependability analysis - Dependability analysis involves identifying hazards and then proposing methods that reduce the risk of the hazard occurring.

Hazard analysis - Involves using guidelines to identify hazards, their root causes, and possible countermeasures.

Risk analysis - Takes hazard analysis further by identifying the possible consequences of each hazard and their probability of occurring.

5

Verification and Validation

Page 67: Quality tools

Risk Management Tools

Page 68: Quality tools

Risk Management Tools

"Risk is all about uncertainty or, more importantly, the effect of uncertainty on the achievement of objectives. The really successful organizations work on understanding the uncertainty involved in achieving their objectives and ensuring they manage their risks so as to ensure a successful outcome."

-Kevin Knight, International Organization for Standardization (ISO)

Page 69: Quality tools

Risk Management Tools

What is Risk Management?

The objective of risk management is to increase the probability and impact of positive events and decrease the probability and impact of negative events.

Good risk management helps a project's stakeholders define the strengths and weaknesses of a project, promoting awareness.

Risk management is the process of identification, analysis, and either acceptance or mitigation of uncertainty in investment decision-making. Essentially, risk management occurs any time an investor or fund manager analyzes and attempts to quantify the potential for losses in an investment and then takes the appropriate action (or inaction) given their investment objectives and risk tolerance. Inadequate risk management can result in severe consequences for companies as well as individuals.

6, 7

Page 70: Quality tools

Risk Management Tools

Methods For Managing Risk

There are four main ways to manage risk: risk avoidance, risk transfer, risk reduction, and risk acceptance. Each is applicable under different circumstances. Some ways of managing risk fall into multiple categories, and multiple ways of managing risk are often utilized simultaneously.

Page 71: Quality tools

Risk Management Tools

Risk Avoidance (elimination of risk)

Completely avoiding an activity that poses a potential risk. While attractive, this is not always practical. By avoiding risk we forfeit potential gains, be it in life, in business, or in investments.

The Business Dictionary defines risk avoidance as a technique of risk management that involves:
- Taking steps to remove a hazard
- Engaging in an alternative activity
- Ending a specific exposure

Example: a utility may opt to invest in nuclear generation in lieu of coal generation to avoid the foreseen risk of onerous greenhouse gas regulation.

6, 8

Page 72: Quality tools

Risk Management Tools

Risk Transfer (insuring against risk)

Most commonly, this means buying an insurance policy. The risk is transferred to a third-party entity (in most cases an insurance company). To be clear, it is the financial risk that is transferred. For example, a homeowner's insurance policy does not transfer the risk of a house fire to the insurance company; it only transfers the financial risk. A house fire is still just as likely as before. Risk sharing is also a type of risk transfer: for example, members assume a smaller amount of risk by transferring and sharing the remainder of the risk with the group.

Risk can be transferred away from the organization managing the project. Examples:
- Warranty
- Insurance
- Contracting to third parties

8

Page 73: Quality tools

Risk Management Tools

Risk Reduction (mitigating risk)

This is the idea of reducing the extent or possibility of a loss. This can be done by increasing precautions or limiting the amount of risky activity. It is the process of identifying, assessing, and controlling risks arising from operational factors, and making decisions that balance risk cost against mission benefits.

Examples:
- installing a security alarm
- smoke detectors
- wearing a seat belt or a helmet

8, 9

Page 74: Quality tools

Risk Management Tools

Risk Retention (accepting risk)

Risk retention simply involves accepting the risk. Even if the risk is mitigated, if it is not avoided or transferred, it is retained. Retention is effective for small risks that do not pose any significant financial threat.

All businesses accept risk during their operations; without risk, commerce would cease to exist. For good risk management it is important to determine a quantified level of risk the project is willing to take.

Example: your project may be sensitive to future price adjustments in the market in which you compete. If the volatility of prices is expected to be under a threshold defined by management, management may accept the risk and proceed with the project.

8, 9

Page 75: Quality tools

Risk Management Tools

Tools and methods for estimating and controlling risk:
- failure mode and effects analysis (FMEA)
- hazard analysis and critical control points (HACCP)
- critical to quality (CTQ) analysis
- health hazard analysis (HHA)

Page 76: Quality tools

Risk Management Tools

Failure Modes and Effects Analysis (FMEA) Tool

Failure modes and effects analysis (FMEA) is a systematic, proactive method for evaluating a process to identify where and how it might fail, and to assess the relative impact of different failures, in order to identify the parts of the process that are most in need of change.

FMEA is a step-by-step approach for identifying all possible failures in a design, a manufacturing or assembly process, or a product or service.

FMEA includes review of the following:
- Steps in the process
- Failure modes (What could go wrong?)
- Failure causes (Why would the failure happen?)
- Failure effects (What would be the consequences of each failure?)

10, 11
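One common way to rank the findings of such a review is a risk priority number (RPN = severity x occurrence x detection, each rated 1 to 10). The slides do not spell this convention out, so the following is only a sketch of that standard practice, with invented process steps and ratings:

```python
# FMEA risk priority numbers: rank failure modes so the riskiest are
# addressed first.
failure_modes = [
    # (step, failure mode, severity, occurrence, detection)
    ("fill",  "underfill bottle", 7, 4, 3),
    ("cap",   "loose cap",        9, 2, 5),
    ("label", "wrong label",      8, 3, 2),
]

ranked = sorted(failure_modes, key=lambda fm: fm[2] * fm[3] * fm[4], reverse=True)
for step, mode, s, o, d in ranked:
    print(f"{step:5s} {mode:17s} RPN = {s * o * d}")  # highest RPN = change first
```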

Page 77: Quality tools

Risk Management Tools

When to Use FMEA?
- When a process, product, or service is being designed or redesigned, after quality function deployment.
- When an existing process, product, or service is being applied in a new way.
- Before developing control plans for a new or modified process.
- When improvement goals are planned for an existing process, product, or service.
- When analyzing failures of an existing process, product, or service.
- Periodically throughout the life of the process, product, or service.

11

Page 78: Quality tools

Risk Management Tools

Hazard analysis and critical control points (HACCP)

HACCP is a systematic preventive approach to food safety that addresses biological, chemical, and physical hazards in production processes that can cause the finished product to be unsafe, and designs measures to reduce these risks to a safe level. In this manner, HACCP is referred to as prevention of hazards rather than finished-product inspection.

The HACCP approach focuses on preventing potential problems that are critical to food safety, known as critical control points (CCPs), through monitoring and controlling each step of the process. HACCP applies science-based controls from raw materials to finished product. It uses seven principles standardized by the Codex Alimentarius Commission. 12

Page 79: Quality tools

Risk Management Tools

7 HACCP PRINCIPLES

Page 80: Quality tools

Risk Management Tools

Benefits of HACCP

Although the main goal of HACCP is food protection, there are other benefits acquired through HACCP implementation, such as:
- Increase customer and consumer confidence
- Maintain or increase market access
- Improve control of the production process
- Reduce costs through reduction of product losses and rework
- Increase focus on and ownership of food safety
- Business liability protection
- Improve product quality and consistency
- Simplify inspections, primarily because of the recordkeeping and documentation
- Alignment with other management systems (ISO 22000)

12

Page 81: Quality tools

Risk Management Tools

Critical to Quality

Critical to quality (CTQ) analysis is the flowchart-based process of identifying quality features or characteristics with regard to the customer and identifying problems. It is the process of analyzing the inputs and outputs and finding the paths that influence the standard or quality of process outputs. CTQ analysis can consist of physical measurements of height, width, depth, and weight; these depict the necessities of quality but lack the specificity to be measurable.

The critical to quality flowchart helps in finding out the quality features of the product, keeping the customer in view and with the outlook of categorizing problems.

CTQs analyze the characteristics of the service or product that are defined by both the internal and external customer. They may include the upper and lower specification limits or any other factors related to the product or service. According to the interpretation of a valued customer, a proper CTQ analysis is an actionable and qualitative business specification methodology. 13

Page 82: Quality tools

Risk Management Tools

Steps to Create and Implement a CTQ Tree:

Determine the basic requirement of the customer: Initially, the sigma team finds out the basic requirement of the customers for the service or the given product. Generally, this basic requirement is stated in comprehensive terms in order to accomplish the requirement of the customer.

Identify the first level of customer requirements: Secondly, the sigma team finds out two or three requirements that can satisfy the basic customer need mentioned in the initial stage of the critical to quality tree, for example, ensuring that phones are answered promptly by professionals.

Identify the customer's second tier of requirements: Thirdly, the sigma team again finds out two or three requirements which can satisfy the need mentioned in the second stage of the critical to quality tree, for example, ensuring that professionals are available round-the-clock to respond to the queries of customers.

13

Page 83: Quality tools

Risk Management Tools

Stop when the requirements become quantifiable: The fourth step is implemented when the team arrives at requirements which can easily be measured.

Confirm final requirements with the customers: The last step is applied when all the needs on the critical to quality tree reach a standard level after due confirmation with the customer.

Advantages of a CTQ tree:
- It helps in transforming unspecific customer requirements into precise requirements.
- It aids sigma teams in detailing broader specifications.
- It gives assurance that all the characteristics of the requirements are fulfilled.

13

Page 84: Quality tools

Risk Management Tools

Health Hazard Analysis (HHA)

The health hazard analysis/assessment is used to systematically identify and evaluate health hazards, evaluate proposed hazardous materials, and propose measures to eliminate or control these hazards through engineering design changes or protective measures, reducing the risk to a level acceptable to the customer.

The HHA evaluation phase determines the quantities of potentially hazardous materials or physical agents (e.g., noise, radiation, heat stress, cold stress) involved with the system, analyzes how these materials or physical agents are used in the system, and estimates where and how personnel exposures may occur and, if possible, the degree or frequency of exposure involved.

Materials are evaluated if, because of their physical, chemical, or biological characteristics, quantity, or concentrations, they cause or contribute to adverse effects in organisms or offspring, pose a substantial present or future danger to the environment, or result in damage to or loss of equipment or property during the system's life cycle.

14

Page 85: Quality tools

Risk Management Tools

The HHA Purpose:
- Provide a design safety focus from the human health viewpoint.
- Identify hazards directly affecting the human operator from a health standpoint.

Page 86: Quality tools

Risk Management Tools

Steps in the HHA Process

The first step of the HHA is to identify ergonomic hazards, quantities of potentially hazardous materials, and exposure to physical agents (noise, radiation, heat stress, cold stress) used with the system and its logistical support.

The next step is to analyze how these potential hazards are used in the system. Based on this information, estimate occurrences of personnel exposure, including (if possible) the degree or frequency of exposure.

The final step is to incorporate cost-effective controls into the system design to reduce exposures to acceptable levels.

As the system design evolves, the HHA increases in fidelity and level of detail. Sources of data for an HHA include safety, test, and capabilities documentation, and lessons learned from legacy systems. 15

Page 87: Quality tools

Six Levels of Cognition Based on Bloom's Taxonomy

Page 88: Quality tools

Six levels of Cognition Based on Bloom’s Taxonomy

KNOWLEDGE
Definition: Student recalls or recognizes information, ideas, and principles in the approximate form in which they were learned.
Sample verbs: write, list, label, name, state, define

COMPREHENSION
Definition: Student translates, comprehends, or interprets information based on prior learning.
Sample verbs: explain, summarize, paraphrase, describe, illustrate

APPLICATION
Definition: Student selects, transfers, and uses data and principles to complete a problem or task with a minimum of direction.
Sample verbs: use, compute, solve, demonstrate, apply, construct

Page 89: Quality tools

Six levels of Cognition Based on Bloom’s Taxonomy

ANALYSIS
Definition: Student distinguishes, classifies, and relates the assumptions, hypotheses, evidence, or structure of a statement or question.
Sample verbs: analyze, categorize, compare, contrast, separate

SYNTHESIS
Definition: Student originates, integrates, and combines ideas into a product, plan, or proposal that is new to him or her.
Sample verbs: create, design, hypothesize, invent, develop

EVALUATION
Definition: Student appraises, assesses, or critiques on a basis of specific standards and criteria.
Sample verbs: judge, recommend, critique, justify

47

Page 90: Quality tools

References:

1. http://www.sqa.org.uk/e-learning/SDPL03CD/page_16.htm
2. Global Harmonization Task Force, Quality Management Systems: Process Validation Guidance (GHTF/SG3/N99-10:2004, Edition 2), page 3.
3. http://www.hq.nasa.gov/office/codeq/software/ComplexElectronics/p_vv.htm
4. Andriole, Stephen J., editor, Software Validation, Verification, Testing, and Documentation, Princeton, NJ: Petrocelli Books, 1986.
5. Kopetz, Herman, Real-Time Systems: Design Principles for Distributed Embedded Applications, Boston, MA: Kluwer Academic Publishers, 1997.
6. https://www.siue.edu/business/symposia/pdf/PM_Symposium_Risk_PPT.pdf
7. http://www.investopedia.com/terms/r/riskmanagement.asp
8. http://www.maysfinancial.com/insurance/ways-managing-risk/
9. http://www.thefreedictionary.com/Risk+reduction
10. http://www.ihi.org/resources/Pages/Tools/FailureModesandEffectsAnalysisTool.aspx
11. http://asq.org/learn-about-quality/process-analysis-tools/overview/fmea.html
12. http://www.gov.mb.ca/agriculture/food-safety/at-the-food-processor/haccp/index.html?print
13. http://sixsigmabasics.com/six-sigma/statistics/critical-to-quality.html
14. http://www.hcrq.com/hha.html

Page 91: Quality tools

References:

15. http://216.54.19.111/~mountaintop/ssse/scopage_dir/ssse/ana.html
16. http://www.valuecreationgroup.com/process_variation.htm
17. http://en.wikipedia.org/wiki/Common_cause_and_special_cause_(statistics)
18. http://www.isixsigma.com/dictionary/common-cause-variation/
19. Shewhart, Walter A. (1931). Economic Control of Quality of Manufactured Product. New York City: D. Van Nostrand Company, Inc. p. 7. OCLC 1045408.
20. Western Electric Company (1956). Introduction to Statistical Quality Control Handbook (1st ed.). Indianapolis, Indiana: Western Electric Co. pp. 23–24. OCLC 33858387.
21. http://www.therevenuecyclenetwork.com/systemproblemsvsnonsystemproblems
22. Shewhart, Walter A. (1931). Economic Control of Quality of Manufactured Product. New York City: D. Van Nostrand Company, Inc. p. 14. OCLC 1045408.
23. Mark Graham Brown, Using the Right Metrics to Drive World-Class Performance.
24. Neville Turbit, Measuring Project Health, 2008.
25. Andy D. Neely, Business Performance Measurement: Theory and Practice.
26. Mark Graham Brown, How to Interpret the Baldrige Criteria for Performance Excellence.
27. http://4squareviews.com/2012/12/14/six-sigma-green-belt-process-performance-metrics/

Page 92: Quality tools

References:

28. http://www.isixsigma.com/tools-templates/capability-indices-process-capability/cp-cpk-pp-and-ppk-know-how-and-when-use-them/
29. http://www.isixsigma.com/tools-templates/capability-indices-process-capability/process-capability-cp-cpk-and-process-performance-pp-ppk-what-difference/
30. http://elsmar.com/pdf_files/CPK.pdf
31. Grubbs, F. E. (February 1969), "Procedures for detecting outlying observations in samples", Technometrics 11 (1): 1–21, doi:10.1080/00401706.1969.10490657. "An outlying observation, or 'outlier,' is one that appears to deviate markedly from other members of the sample in which it occurs."
32. Grubbs 1969, p. 1, stating "An outlying observation may be merely an extreme manifestation of the random variability inherent in the data. ... On the other hand, an outlying observation may be the result of gross deviation from prescribed experimental procedure or an error in calculating or recording the numerical value."
33. http://pareonline.net/getvn.asp?v=9&n=6
34. http://people.uncw.edu/pricej/teaching/statistics/outliers.htm
35. Kreyszig, Erwin (2006). Advanced Engineering Mathematics, 9th Edition. p. 1248. ISBN 978-0-471-48885-9.
36. http://www.sqconline.com/about-acceptance-sampling

Page 93: Quality tools

References:

37. http://www.utdallas.edu/~metin/Ba3352/QualityAS.pdf
38. http://www.pitt.edu/~super7/43011-44001/43911.ppt
39. https://www.statpac.com/surveys/sampling.htm
40. Luis Puncel, Audit Procedures 2008.
41. http://www.answers.com/Q/What_is_consumer_risk_and_what_is_producer_risk
42. http://en.wikipedia.org/wiki/Change_control
43. http://www.gmp-publishing.com/media/files/leitartikel_2012/LOGFILE-31-2012-Change_Management.pdf
44. http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/10507/1/02-2525.pdf
45. http://cmstat.com/solutions/pdmplus/hardware-configuration-management/
46. http://en.wikipedia.org/wiki/Configuration_management#Software
47. http://www.edpsycinteractive.org/topics/cognition/bloom.html
48. http://www.businessdictionary.com/definition/variation.html

Page 94: Quality tools

THANKS!
