
Module I. Introduction to Quality Management

Lecture 1 - How did the concept of Quality Management evolve over time?

From ancient times, the quality of goods and services has been monitored, directly or indirectly. If we look at the construction of the pyramids, ancient Greek arts, crafts, and architecture, and Roman-built cities, they clearly demonstrate artists' and engineers' commitment to achieving excellence in quality. However, until 1800, the production of goods and services was primarily done by small groups of individuals. These small groups were often family businesses. Thus, the standard of quality was controlled and set by the individual who was, in turn, also responsible for producing the item. This phase, comprising the time period up to 1900, is called the period of 'Operator Quality Control'. The entire product was manufactured by a single person (or operator) or by a small group of persons, who essentially controlled quality. Thus, controlling and improving the quality of the product was aligned with the philosophy of pride in workmanship.

From the early 1900s to 1920, a second phase evolved, which was called the 'Foreman Quality Control' period. In this phase, the concept of mass production, with little emphasis on personal accomplishment in the workplace, was introduced. Supervisors were responsible for ensuring that quality was achieved. Foremen or supervisors controlled the quality of the product, and they were also responsible for shop-floor operations.

The period of 1920 to 1940 saw the next phase of quality, the so-called 'Inspection Quality Control'. With more complicated products and processes, it became impossible to keep a close watch over individual stages of operation. Inspectors were assigned to check the quality of a product after processing. Individual product standards were set, and any discrepancies between standard and actual product features were reported. Defective items were set aside as scrap, and a few items with minor defects were reworked to meet the specified standard or specification. In this period, the statistical process control aspects of quality were also popularized and gained widespread application in industries. In 1924, Walter A. Shewhart of Bell Telephone Laboratories introduced the concept of statistical charts to monitor the variability of product characteristics. These charts were called control charts. In the latter half of the 1920s, H. F. Dodge and H. G. Romig, also from Bell Telephone Laboratories, proposed acceptance sampling plans for inspection. These plans substituted for the concept of 100 percent inspection. During the 1930s, the application of acceptance sampling plans was in full flow in industries. In 1929, Walter Shewhart, with the help of the American Society for Testing Materials (ASTM), the American Society of Mechanical Engineers (ASME), the American Statistical Association (ASA), and the Institute of Mathematical Statistics (IMS), created a joint committee for the development of statistical techniques for application in engineering industries.

The phase of 'Statistical Quality Control' was between 1940 and 1960. During World War II, the principles of sampling inspection plans proved extremely useful. The American Society for Quality Control (ASQC) was formed in 1946. A set of sampling inspection plans for attributes, the so-called MIL-STD-105A, was developed in 1950. These plans underwent various modifications, viz. MIL-STD-105B, MIL-STD-105C, MIL-STD-105D, and MIL-STD-105E. In addition, in 1957, a set of sampling plans for variables called MIL-STD-414 was also proposed. Juran published his Quality Control Handbook in 1951.

The use of quality control procedures and the benefits of statistical quality control were not explored in most U.S. industries. This may have been due to the monopoly market. However, Japan, after World War II, embraced the new philosophy wholeheartedly. W. Edwards Deming was invited to Japan in 1950, and Japanese engineers were convinced of the importance of statistical quality control as a means of gaining competitive advantage in the world economy. Another quality guru, J. M. Juran, visited Japan in 1954 and further impressed upon management the strategic role it plays in achieving end quality. Thus, the Japanese developed a strong commitment to training and educating their employees in statistical process control.

The next phase of quality, during the 1960s, is known as Total Quality Control. An important feature of this phase was the involvement of several departments and personnel in the quality development process. Prior to this period, the attitude was that quality is the responsibility of the inspection department. In the 1960s, there was a change in this attitude: employees began to understand that each department within an organization has a contribution to make in building quality into an item. The concept of zero defects, which centers on achieving productivity through worker involvement, emerged during this period. With more or less the same underlying philosophy, quality circles were introduced in many Japanese industries. The concept of quality circles is based on a participative, or teamwork, style of management. It holds that quality and productivity can be achieved through informal group discussion, decisions, and pertinent action.

The 1970s constitute the phase of 'Total Quality Management'. This phase involved the participation of everyone in the organization, from the operator to the supervisor, the manager, and even the chief executive officer. Quality was the responsibility of every individual. Feigenbaum, another quality guru, defined the philosophy as:

‘A quality practice that is agreed on companywide and plant wide operating work structure,

documented in effective, integrated technical and managerial procedures, for guiding the

coordinated actions of the people, the machines, and the information of the organization in the best

and most practical ways to assure customer quality satisfaction and economical costs of quality.’

The 1970s also marked the extensive use of a graphical tool known as the cause-and-effect diagram. Also in this decade, G. Taguchi of Japan introduced the concept of robust design in statistical experimentation.

During the 1980s, various quality control and statistical software packages came onto the market. The notion of total quality management increased the emphasis on suppliers' quality, product design, and quality assurance. Ford Motor, Daimler Chrysler, and General Motors Corporation adopted the quality philosophy and insisted that their suppliers adopt various quality control and quality improvement techniques.

In 1989, Motorola started the Six Sigma initiative, a quality philosophy driven by a statistical approach to decision making, which within 10 years was widely adopted by various other companies.


Module I. Introduction to Quality Management

Lecture 2 - What do we mean by Product and Service Quality?

'Quality' can be defined as a standard measure of how well a product or service conforms to specified standards so as to meet customer requirements. Quality has been defined by various quality gurus. Juran defined quality as 'fitness for use' in 1974. Crosby, in 1979, defined quality as 'conformance to requirements or specifications'. Garvin, in 1984, divided the definition of quality into five major categories, namely transcendent, product-based, user-based, manufacturing-based, and value-based. In addition, he also identified eight attributes that may be used to define product quality: performance, features, reliability, conformance, durability, serviceability, aesthetics, and perceived quality. The definition proposed by Crosby seems more appropriate from both the service and manufacturing perspectives. However, terms such as delighting customers, robustness, and reducing variability can also be associated with quality when organizations talk about it. The driving force that determines the level of quality to be designed into a product or service is the customer. Quality also has a time dimension: as the needs and preferences of customers change with time, the level of quality, or degree of customer satisfaction, also changes. Thus quality, in this sense, is not constant. It is a crucial parameter that differentiates an organisation from its competitors. The term quality therefore also implies different levels of expectation among different segments of consumers.

Now, who are the customers?

There are two distinct types of customers: external and internal. An external customer may be the one who uses the end product or service, the one who purchases it, or the one who influences its sale. An external customer exists outside the organization.

An internal customer is also important. Every function within an organization, whether engineering, order processing, or production, has an internal customer: each function receives a product or service from another function and, in exchange, provides a product or service to a subsequent function. From a process perspective, every process is considered a customer of the preceding process; a grinding process, for instance, can be an internal customer of a boring process. A diagrammatic representation is provided below (Figure 1-1) to show internal and external customers from a supply network perspective.

Figure 1-1: Customers from a Basic Supply Network Perspective

From the customers' perspective, service is the combination of the customers' experience and their perception of the outcome of the service. The term service quality (Johnston and Clark, 2008) is often used to mean different things. Some use the term to mean how the customer is treated overall; this may be more accurately called 'quality of service', as opposed to service quality, which can mean the entirety of outcome and experience. Sometimes service quality is used to mean the same as satisfaction, i.e. perceived service quality.

A few definitions that are important when we discuss product quality are worth mentioning:

Quality Characteristics

There can be more than one element that defines the intended quality level of a product or service. These elements are the so-called 'quality characteristics'. Quality characteristics may be of several types (Montgomery and Runger, 2010). They may be:

Physical: length, weight, voltage, viscosity; or

Sensory: taste, appearance, color; or

Time orientation: reliability, durability, serviceability.


Thus, the gross weight of a coke can, the tensile strength of a bar, the specific gravity of a liquid, and so on, are measurable quality characteristics. There may also be some intangible characteristics, such as taste, smell, or beauty, and there can be ethical characteristics, such as honesty, courtesy, and friendliness, which are difficult to measure and define.

Variables and Attributes

Quality characteristics fall into two broad classes, viz. variables and attributes. Characteristics that are measurable and are expressed on a numerical scale (ordinal or interval) are called variables: the diameter of a bore expressed in millimeters, the density of a liquid in grams per cubic centimeter, or customer satisfaction expressed on a 7-point scale. If we express a characteristic only in terms of conforming or defective, it falls into the attribute, or nominal, category.

Defects and Defective Unit

Before defining an attribute, the terms defect and defective unit should be defined. A defect is a quality characteristic that does not meet its stipulated specifications. Let's say the specification for the thickness of steel washers is 3 ± 0.1 millimeters (mm). If we have a washer with a thickness of 3.15 mm, then its thickness is a defect.

The American National Standards Institute (ANSI) and the American Society for Quality Control (ASQC) provide a definition of a defect, as stated in the ANSI/ASQC standard:

‘A defect is a departure of a quality characteristic from its intended level or state that occurs

with a severity sufficient to cause an associated product or service not to satisfy intended

normal or reasonably foreseeable usage requirements.’

A defective unit has one or more defects such that the unit is unable to meet the intended

standard or use and is unable to function as required. An example of a defective unit might be a

cast iron cylinder that has an internal diameter and a weight that both fail to satisfy

specifications, thereby making the unit dysfunctional.


A quality characteristic is said to be an attribute if it is classified as either conforming or nonconforming to a stipulated specification. A quality characteristic that is not measured on a numerical scale is expressed as an attribute. For example, the smell of a perfume is characterized as either acceptable or not; the color of a cloth is either acceptable or not. Variables are sometimes treated as attributes because it is simpler to measure them this way or because it is difficult to obtain variable data on them. Many examples may be cited in this category. For instance, the diameter of an engine cylinder is, in theory, a variable. However, if we measure the diameter using a go/no-go gauge and classify it as either conforming or defective (with respect to some established specifications), then the characteristic is expressed as an attribute. The reason for using a go/no-go gauge, as opposed to a micrometer, could be economic: a measurement with a go/no-go gauge takes much less time and is consequently less expensive.

Standard or Specification

As the definition of quality involves meeting the requirements of the customer, these requirements need to be documented. A standard, or a specification, refers to a precise statement that formalizes the requirements of the customer; it may relate to a product, a process, or a service. For example, the specifications for a bore might be 3 ± 0.1 centimeters (cm) for the inside diameter, 5 ± 0.2 cm for the outside diameter, and 12 ± 0.5 cm for the length. This means that for the bore to be acceptable to the customer, all three dimensions must be within the specified limits.
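As a small illustration of checking conformance to such a specification, here is a minimal Python sketch; the helper function and the measured values are hypothetical, introduced only for this example:

def within_spec(value, nominal, tolerance):
    # True if the value lies within nominal +/- tolerance
    return (nominal - tolerance) <= value <= (nominal + tolerance)

# Specification for the bore: (nominal, tolerance) in centimeters
spec = {
    "inside_diameter": (3.0, 0.1),
    "outside_diameter": (5.0, 0.2),
    "length": (12.0, 0.5),
}

# Hypothetical measurements of one bore
measured = {"inside_diameter": 3.04, "outside_diameter": 5.25, "length": 11.8}

# The bore is acceptable only if every dimension is within its limits
acceptable = all(within_spec(measured[d], nom, tol) for d, (nom, tol) in spec.items())
print("Acceptable" if acceptable else "Not acceptable")  # outside diameter 5.25 > 5.2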


Module I. Introduction to Quality Management

Lecture 3 - What are the dimensions of quality?

Before we discuss the dimensions of quality, we must discuss three aspects associated with the definition of quality: quality of design, quality of conformance, and quality of performance.

Quality of Design

Quality of design concerns the set of conditions that the product or service must minimally possess to satisfy the requirements of the customer. Thus, the product or service must be designed so as to meet at least minimally the needs of the consumer. However, the design must be simple and also inexpensive enough to meet the customers' product or service expectations. Quality of design is influenced by many factors, such as product type, cost, profit policy, demand for the product, availability of parts and materials, and product reliability.

Quality of Conformance

Quality of conformance is basically meeting the standards defined in the design phase after the product is manufactured or while the service is delivered. This phase is also concerned with controlling quality from the raw material through to the finished product. Three broad aspects are covered by this definition, viz. defect detection, defect root cause analysis, and defect prevention. Defect prevention deals with the means to deter the occurrence of defects and is usually achieved using statistical process control techniques. Defects may be detected through inspection, testing, or statistical analysis of data collected from the process. Subsequently, the root causes behind the presence of defects are investigated, and finally corrective actions are taken to prevent their recurrence.

Quality of Performance

Quality of performance is how well the product functions or the service performs when put to use. It measures the degree to which the product or service satisfies the customer from the perspective of both quality of design and quality of conformance. Meeting customer expectations is the focus when we talk about quality of performance. The automobile industry conducts test drives of vehicles to collect information about mileage and oil consumption. Bulbs are life-tested to understand their reliability during useful life. Customer surveys are conducted to learn customers' perceptions of the service delivered. If the product or service does not live up to customer expectations, then adjustments are needed in the design or conformance phase.

Garvin (1984) also provides a discussion of eight critical dimensions of product quality. The key points concerning these dimensions of quality are summarized below.

Performance (will the product do the intended job in the field?)

This we have already discussed. It concerns evaluating product or service performance with respect to certain specific functions and determining how well the product or service performs from the customer's perspective.

Reliability (how often will the product fail within a stipulated time?)

This concerns the probability that a component, say of an automobile or aircraft, does not fail while in service for a specified time period. The lower the reliability, the greater the chance of repair or replacement.

Durability (how long can the product last?)

This is the effective life of the product, or its longevity before it is declared unfit for use. Repair is not possible after this phase of life.

Serviceability (how easy is it to repair the product?)

Customers' views on quality are also influenced by how quickly and economically a repair or routine maintenance activity can be accomplished; this is termed serviceability. For example, how long did it take the bank to correct an error in your credit card statement?

Aesthetics (how appealing does the product look?)

This is all about the visual appeal of the product, often taking into account factors such as style, color, shape, packaging, tactile characteristics, and other sensory features.


Features (value, or what can the product actually do?)

Customers tend to purchase products that have more value-added features, beyond the basic criteria for entering the market. A spreadsheet package may come with built-in statistical quality control features while its competitors in the same price range do not. A feature may also be defined as an additional or secondary characteristic that supplements the primary function of a product. Thus, a car stereo is a feature of an automobile whose primary function is transportation.

Perceived Quality (what is the customer's feeling about the product after intended use?)

This is all about the impression of a customer after using the product and/or service. This dimension is directly influenced by any failures of the product that are highly visible to the public, or by the way the customer is treated when a quality-related problem with the product is addressed. Customer loyalty and repeat business are closely related to perceived quality. For example, if you make regular business trips on a particular airline that almost always arrives late and has occasional incidents of luggage lost in transit, you will probably prefer not to fly on that carrier and will prefer its competitor. You will rate this dimension very low for such a carrier.

Conformance to Standards (is the product made exactly as designed?)

This is what was discussed earlier as quality of conformance.

Service Quality

Service is generally defined as an experience felt by the consumer. Say, in a restaurant, the way the customer is treated is considered part of the service. Services are often intangible in nature. The quality of a service is judged by how well the customer is satisfied with it. Service quality is about comparing performance with customer expectations. Service quality also leads to customer satisfaction, and the two are interrelated. The key to retaining customers is to understand their needs and fulfill them. Making customers buy the services repeatedly requires focus on the dimensions of service quality. There are five dimensions of service quality, given below:


Tangibles: The tangible dimension of quality relates to the surroundings in which the service is provided to the customers. In a restaurant, these may be the seating arrangement, interior decoration, and lighting.

Reliability: Reliability refers to the dependability of customers on a specific service; it is all about what is promised versus what is delivered. For example, Indigo airlines in India has proved to be a low-cost airline with high punctuality.

Responsiveness: Responsiveness refers to the time taken by a service provider to respond to a request. For example, LG customer care in India promises a response to customer complaints within 24 hours.

Assurance: This dimension of service quality is related to the competence of the service

employee. The employees must be competent to gain the trust of customers.

Empathy: Empathy refers to the caring attitude that an organization shows toward customers. This dimension of service quality calls for individual attention to customers, so as to make them feel special.

Considering the above dimensions, comparisons are made between actual service performance and customer expectations. The difference between customers' expectations and the actual delivery as perceived at the time of service performance (the so-called 'perception') is known as the service quality gap. Organizations conduct surveys and exploratory research to study the various service gaps, so as to understand why a gap arises and how it can be reduced. Readers may refer to Parasuraman et al. (1985, 1988) for further details on Gap Models.


Module I. Introduction to Quality Management

Lecture 4 - What are the underlying quality philosophies suggested by Deming, Juran and

Crosby?

Several quality gurus, such as W. Edwards Deming, Philip B. Crosby, and Joseph M. Juran, have made significant contributions to the field of quality. They are largely responsible for the global adoption and integration of quality management in industry. They preached that management commitment is the key to a successful quality program. They also emphasized that any philosophy for improving quality in a company needs time and cannot take hold overnight.

W. Edwards Deming is credited with the impressive turnaround of Japanese industry after World War II. Deming's philosophy emphasizes the role of management in the problems that industry faces. Deming said that about 85% of problems can be solved only by management; solving them involves changing the method of operation, not scolding the workers. His idea was to improve the process, not to blame the workers. In Deming's world, the workers' responsibility lies in communicating to management the information they possess regarding the process, and both must work in harmony. Deming's ideal management style is holistic, and the organization is viewed as an integrated entity. The idea is to plan for the long run and provide a course of action for the short run. Deming believed in the adoption of a total quality philosophy and emphasized the never-ending nature of statistical quality control in the quality improvement process. Deming's approach demands a cultural transformation in the organization. Deming advocated certain key components that are essential for the journey toward continuous improvement, viz. knowledge of the system and the theory of optimization (looking at the system as a whole, not as individual processes), knowledge of the theory of variation (understanding common and special causes of variation, with an emphasis on statistical process control), exposure to the theory of knowledge (data-driven prediction based on underlying knowledge about processes), and knowledge of psychology (understanding the behavior and interactions of people, and the interactions of people with their working environment). Deming provides 14 points for management that will sustain the productivity and competitiveness of a company in the long run. Readers may consult Deming's book (published in 1982, 2000) or any book on Quality Management to understand the 14 points in depth. Around 1950, the Shewhart cycle was renamed in Japan as the Deming PDCA (Plan-Do-Check-Act) cycle. This is a continuous cycle of process improvement, illustrated in Figure 1-2 below.

Figure 1-2: PDCA Deming Cycle

Deming's 14 points for management provide a road map for continuous quality improvement. While implementing these points, certain practices of management are labeled by Deming as deadly diseases or sins: (i) management by visible figures only, (ii) lack of constancy of purpose, (iii) performance appraisal by numbers, (iv) a short-term view of the organization, and (v) mobility of management. These must be eliminated. Most of Deming's deadly diseases involve a lack of understanding of variation.

Joseph M. Juran emphasized a seven-step process for controlling quality, which is employed by various organizations to control their processes. In this context, Juran first visited Japan in the 1950s and educated the management of large organisations about the need for management's commitment to attaining quality. The quality standards developed by the Japanese are based on these concepts. According to Juran's philosophy, quality is defined as 'fitness for use'.


Philip B. Crosby has a particularly wide-ranging understanding of the various operations in industry because he started as a line inspector and worked his way up. Such firsthand experience provided him with a keen awareness of what quality is, what the obstacles to quality are, and what can be done to overcome them. He founded Philip Crosby Associates in 1979. His quality management grid identifies and pinpoints operations that have potential for improvement. The grid is divided into five stages of maturity, and six measurement categories aid in the evaluation process. Readers can refer to his book Quality is Free (1979) for an in-depth account of his philosophy. He suggested that the rational quality improvement approach is to prevent defects, and he held that the only performance standard is zero defects. Crosby emphasized performance from the cost-of-quality perspective. He preached reducing the costs of 'unquality', such as scrap, rework, inventory, machine breakdown, and inspection; these are the costs that result from poor quality.


Module I. Introduction to Quality Management

Lecture 5 - What do we mean by Quality Cost?

Quality costs are defined as those costs associated with the non-achievement of product or service quality, as defined by the requirements established by the organization and its contracts (agreements) with customers. In simple terms, quality cost is the cost incurred by a firm because it produces poor-quality products. Measurement and analysis of the various costs aid in tracking the impact of an effective quality management system. Quality costs can be summed up as the costs of preventing non-conformance to requirements, of inspecting the product or service for non-conformances, and of failures to meet specifications. The American Society for Quality Control (1971) has defined four major categories of quality costs, which are described below:

Prevention Costs

Prevention costs are incurred in planning, implementing, and maintaining a quality practice. They include salaries and developmental costs for process control approaches, information systems, and all other costs associated with making the product right the first time. Costs associated with education and training are also included in this category, as are the costs of defect prevention activities and quality audits.

External Failure Costs

External failure costs are incurred when the product does not perform satisfactorily after it is shipped to the end customer. If there were no defective units, the external failure cost would be zero. Costs incurred due to customer complaints, costs of investigation and adjustment where required, and costs associated with the receipt, handling, repair (where possible), and replacement of defective products come within external failure costs. Warranty costs (failure of a product within the warranty period), which are specifically monitored in industries, also fall under this category.


Appraisal Costs

Appraisal costs are associated with measuring, evaluating, or inspecting products, components, or purchased materials to determine their degree of conformance to specified design standards. Such costs include the inspection and testing of incoming materials, as well as product inspection and testing at various stages of manufacturing up to final acceptance. Appraisal costs are associated with managing the outcome, whereas prevention costs are associated with managing the goal.

Internal Failure Costs

Internal failure costs are incurred when products, subassemblies, components, or materials fail to meet quality requirements prior to the transfer of ownership to the customer. These costs would disappear if no defects or defectives occurred in the product while it is manufactured in-house. Internal failure costs also include the labor and overhead costs associated with any internal repair.


Module II. Process Quality Improvement

Lecture 1 - Why is process quality improvement important?

From an operations management perspective, a process is any activity or group of activities that takes one or more inputs, transforms them, and provides one or more outputs for its customers (internal or external). The key to success in an organization is to understand how its processes work to deliver the required outputs. As per the Lean Management philosophy, any process should add value, and unnecessary waste activities should be eliminated from the process steps. In the context of the Quality Management philosophy, a process is a transformation of inputs into outputs that satisfies the required quality characteristics defined by the customers. These characteristics are called 'CTQs' (Critical-to-Quality) or 'responses'. The transformation happens by controlling a few vital critical input and process variables (x1, ..., xp), known as controllable variables. These variables influence the mean and variance of the CTQ; thus, the proper setting of these variables is critical to obtaining the best or optimal output.

However, there are other variables (z1, ..., zm) which cannot be controlled (say, room temperature or humidity) or are uneconomical to control. The variation caused in the CTQ by these variables is assumed to be natural variability, or chance-cause variability. Taguchi emphasized minimizing CTQ variability even in the presence of these uncontrollable (noise) variables by using orthogonal-array-based design of experiments (DOE).
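To make the distinction concrete, the following minimal Python sketch simulates a hypothetical process model (the response function, its coefficients, and the noise distribution are illustrative assumptions, not from the lecture). The controllable setting x shifts the CTQ mean, while the uncontrollable variable z produces the chance-cause variability:

import random

def ctq(x, z):
    # Hypothetical process model: response y depends on a controllable
    # setting x and an uncontrollable (noise) variable z
    return 2.0 * x + 0.5 * z

random.seed(1)
for x in (1.0, 2.0):  # two candidate settings of the controllable variable
    ys = [ctq(x, random.gauss(0, 1)) for _ in range(10000)]
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)
    print(f"x = {x}: CTQ mean = {mean:.2f}, variance = {var:.3f}")

Changing x moves the mean of the CTQ, while the variance (about 0.25 here) comes entirely from the noise variable z; this is the kind of variability that Taguchi's robust design seeks to make the process insensitive to.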

[Figure: schematic diagram of a process. Inputs enter the process; uncontrollable variables (z1, z2, ..., zm) act on it from above and controllable variables (x1, x2, ..., xp) from below; the output is the CTQ(s) or responses (y).]

Figure 2-1: Schematic Diagram of a Process with Influential Variables


Determining the best settings for the controllable variables is the primary focus of process quality improvement activity. If the process improves, we will get the best output or responses and, as a consequence, the best desired quality product for the end customer. Every company focuses on process quality improvement so as to improve its prime competitive priority, quality.


Module II. Process Quality Improvement

What are the graphical and statistical techniques commonly used for understanding the current state of quality? What are the process quality monitoring, control, and improvement techniques?

A systematic solution approach to any quality improvement activity is critical and has always been emphasized by the quality gurus (Juran, Deming, and Shewhart). Various tools and techniques are commonly used to identify the critical control variables. The most basic set of techniques used in quality management is the 7 QC Tools, which consist of the Pareto diagram, process flow diagram, cause-and-effect diagram, check sheets, histogram, run charts, and scatter diagram. Additional statistical tools used are hypothesis testing, regression analysis, ANOVA (Analysis of Variance), and Design of Experiments (DOE). In the following sections, we will go through each technique in greater detail.

7QC TOOLS

Pareto Diagram

Vilfredo Pareto (1848-1923) conducted extensive studies of the distribution of wealth in Europe. He found that a few people held a lot of the money, while the majority of people had little money in hand. This unequal distribution of wealth became an integral part of economic theory. Dr. Joseph Juran recognized this concept as a universal one that can be applied to many other fields, and coined the phrase 'vital few and useful many'.

A Pareto diagram is a graph that ranks data (say, types of defects) in descending order from left to right, as shown in Figure 2-2. In the diagram, the data are classified by coating machine. Other possible data classifications include problems, complaints, causes, types of nonconformities, and so forth. The vital few appear on the left of the diagram, and the useful many on the right. It is sometimes worthwhile to combine some of the useful many into one classification called 'other'; when this category is used, it is placed on the far right. The vertical scale can be dollar value (or frequency), and the percentage in each category is shown on top of each bar. In this case, Pareto diagrams were constructed for both frequency and dollar value. As can be seen from the figure, machine 35 has the greatest number of nonconformities, but machine 51 has the greatest dollar value. Pareto diagrams can be distinguished from histograms (to be discussed) by the fact that the horizontal scale of a Pareto diagram is categorical, whereas the scale of a histogram is numerical or continuous.

Figure 2-2: Simple Pareto Diagram

Pareto diagrams are used to identify the most important problem types. Usually, about 75% of the problems are caused by 25% of the items. This is shown in the figure above, where coating machines 35 and 51 account for about 75% of the total nonconformities. Actually, the most important items could be identified simply by listing them in descending order; however, the graph has the advantage of providing a visual impact, showing those vital few characteristics that need attention. Construction of a Pareto diagram is very simple. There are five steps involved:

Step-1: Determine the method of classifying the data: by problem, cause, nonconformity, and so forth.

Step-2: Decide whether dollars (best), frequency, or both are to be used to rank the characteristics.

Step-3: Collect data for an appropriate time interval, or use historical data.

Step-4: Summarize the data and rank the categories from largest to smallest.

Step-5: Construct the diagram and find the vital few problem areas.
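To make the steps concrete, the ranking and cumulative-percentage computation can be sketched in a few lines of Python (an illustrative sketch using the defect counts from Figure 2-3 below; the code itself is not part of the original lecture):

defects = {"O-Ring": 74, "Finish": 57, "Scratch": 52, "Poor Seal": 34,
           "Dent": 33, "Dents": 23, "Other": 11}

# Step-4: rank the categories from largest to smallest,
# keeping "Other" on the far right by convention
ranked = sorted(((k, v) for k, v in defects.items() if k != "Other"),
                key=lambda kv: kv[1], reverse=True)
ranked.append(("Other", defects["Other"]))

total = sum(defects.values())
cumulative = 0
print(f"{'Defect Type':<12}{'Freq':>6}{'Percent':>9}{'Cum %':>8}")
for name, freq in ranked:
    cumulative += freq
    print(f"{name:<12}{freq:>6}{100 * freq / total:>9.1f}{100 * cumulative / total:>8.1f}")

Running the sketch reproduces the frequency, percent, and cumulative percent columns shown in Figure 2-3.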


The Pareto diagram is a powerful quality improvement tool for determining the most critical problem to consider first. The diagram can also provide cumulative percentage information, as produced by many statistical software packages (e.g., MINITAB, http://www.minitab.com/en-us/products/minitab/?WT.srch=1&WT.mc_id=SE001570) and shown below.

The MINITAB output ranks the defect types and reports frequency, percent, and cumulative percent (frequency on the left axis, cumulative percent on the right):

Defect Type   Frequency   Percent   Cum %
O-Ring            74        26.1     26.1
Finish            57        20.1     46.1
Scratch           52        18.3     64.4
Poor Seal         34        12.0     76.4
Dent              33        11.6     88.0
Dents             23         8.1     96.1
Other             11         3.9    100.0

[Figure: bar chart titled 'Pareto Showing Cumulative Percentage of Defects', with the cumulative percentage line overlaid.]

Figure 2-3: Pareto Diagram with Cumulative %

Process Flow Diagram

For many products and services, it may be useful to construct a process flow diagram. Figure 2-

4 shows a simple process flow diagram for order entry activity of a make-to-order company that

manufactures gasoline filling station hose nozzles. These diagrams show flow of product or

service as it moves through various processing stages. The diagram makes it easy to visualize the

entire multistage process, identify potential trouble spots, waste activities, and locate control

points. It answers the question, "Who is our next customer?" Improvements can be accomplished

by changing (reengineering), reducing, combining, or eliminating process steps.


[Figure: order entry process flow, from order receipt by telephone, letter, or fax, through log-in, credit check, contact review (with a possible hold), inventory check, and production scheduling, to production, with notification and drawing steps.]

Figure 2-4: Process Flow Diagram for an Order Entry Activity

Standardized symbols (http://www4.uwsp.edu/geo/faculty/gmartin/geog476/Lecture/flowchart_symbols.html) may be used, as recommended in industrial engineering and Lean Management (Value Stream Mapping, http://www.strategosinc.com/vsm_symbols.htm) textbooks. In Six Sigma methodology, process mapping is done with SIPOC (Suppliers-Inputs-Process-Outputs-Customers, http://www.isixsigma.com/tools-templates/sipoc-copis/sipoc-diagram).

Cause and Effect Diagram

A cause-and-effect (C&E) diagram is a picture composed of lines and symbols designed to

represent a meaningful relationship between an effect (say Y) and its potential causes (say X).

Potential causes (which have evidence) are not all possible causes that come up in brain storming

exercise. It was developed by Dr. Kaoru Ishikawa in 1968, and sometimes referred to as the

‘Ishikawa diagram’ or a ‘fish bone diagram’.

C&E diagram is used to investigate either a "bad" effect and to take action to rectify the potential

causes or a "good" effect and to learn those potential causes that are responsible for the effect.

For every effect, there are likely to be numerous potential causes. Figure 2-5 illustrates a simple

C&E diagram with effect on right and causes on left. Effect is the quality characteristic that

needs improvement. Causes are sometimes broken down into major sub causes related to work

method, material, measurement, man (people), machinery (equipment), and environment (5M &

Page 23: Module I. Introduction to Quality Management Lecture – 1 ... · Another quality guru, J. M. Juran, visited Japan in 1954 and further impressed upon the strategic role that management

1E). It is not necessary that every diagram will always have 5M and 1 E cause and can depends

also on the problem type. There can be other major causes in case of service-type problem.

Each major cause is further subdivided into numerous sub causes. For example, under work

methods, we might have training, knowledge, ability, physical characteristics, and so forth. C&E

diagrams are the means of picturing all these major and sub causes. The identified potential

causes considered critical (say 1, 2, 3, 4 and 5 as given in the below diagram) may be further

explored by experimentation to understand their impact on the house paint.

Figure 2-5: A Simple Cause and Effect Diagram

C&E diagrams are useful to:

1) identify potential causes (not all possible causes);

2) analyze actual conditions for the purpose of product or service quality improvement;

3) eliminate conditions that cause nonconformities and customer complaints; and

4) support statistical experimentation, decision-making, and corrective-action activities.

A C&E diagram can also be generated in MINITAB, as shown below.


[Figure: cause-and-effect diagram generated in MINITAB for the effect 'Surface Flaws', with the major branches Personnel, Machines, Material, Methods, Measurements, and Environment, each carrying sub-causes such as operators, training, shifts, lathes, bits, speed, alloys, lubricants, suppliers, inspection procedure, micrometers, microscopes, condition, and moisture content (%).]

Figure 2-6: Cause & Effect Diagram in MINITAB

Check Sheets

The main purpose of check sheets in earlier days was to ensure that data were collected carefully and accurately by the personnel concerned. Data should be collected in such a manner that they can be quickly and easily used and analyzed. The form of a check sheet is individualized for each situation and is designed by the project team. Figure 2-7 shows a check sheet for paint nonconformities on bicycles.

Check sheets can also be designed to show the location of defects. For example, the check sheet for bicycle paint nonconformities could show an outline of a bicycle, with 'X's indicating the locations of the nonconformities. Creativity plays a major role in the design of a check sheet. It should be user-friendly and, whenever possible, include information on location.


Figure 2-7: A Typical Check Sheet

Histogram

A histogram provides information on the variation of the characteristic of interest, as illustrated by Figure 2-8. It suggests the shape of the probability distribution of the sample observations and also indicates possible gaps in the data. The horizontal axis in Figure 2-8 indicates the scale of measurement, and the vertical axis represents frequency or relative frequency.


Figure 2-8: Histogram

Histograms have certain identifiable characteristics, as shown in Figure 2-8. One characteristic of the distribution concerns the symmetry, or lack of symmetry, of the data: are the data equally distributed on each side of the center of measurement (e.g., temperature), or are they skewed to the right or left? Another characteristic concerns the kurtosis of the data. A final characteristic concerns the number of modes, or peaks, in the data: there can be one mode, two modes (bimodal), or multiple modes.

Histograms can provide sufficient information about a quality problem to serve as a basis for decision making without further statistical analysis. They can also be compared with regard to location, spread, and shape. A histogram is like a snapshot of the process, showing the variation in the characteristic. Histograms can help assess process capability, compare the process with specifications, suggest the shape of the population, and indicate discrepancies in the data. A typical histogram produced with the MINITAB software is shown below.


[Figure: MINITAB histogram of marks in a statistics course; horizontal axis: marks (roughly 55 to 90), vertical axis: frequency (0 to 14).]

Figure 2-9: Histogram of Marks in Statistics Course
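A comparable histogram can be drawn in a few lines of Python; the sketch below is illustrative only, and the marks are hypothetical rather than the data behind Figure 2-9:

import matplotlib.pyplot as plt

# Hypothetical marks from a statistics course
marks = [58, 62, 65, 66, 68, 70, 71, 72, 73, 74,
         75, 75, 76, 77, 78, 79, 80, 82, 85, 88]

plt.hist(marks, bins=7, edgecolor="black")  # frequency on the vertical axis
plt.xlabel("Marks in Statistics Course")
plt.ylabel("Frequency")
plt.title("Histogram")
plt.show()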

Run Charts

A run chart, shown in Figure 2-10, is a very simple quality tool for analyzing a process with respect to (w.r.t.) time during the development stage or, for that matter, when other charting techniques are not quite relevant. The important point is to draw a picture of the process w.r.t. time and let it 'talk' to you. Plotting time-oriented data points is a very effective way of highlighting any pattern observed over time. This type of plotting should be done before constructing a histogram or doing any other statistical data analysis.

Figure 2-10: Run Chart


The horizontal axis in Figure 2-10 is labeled as time (day of the week), and the vertical axis of the graph represents the measurement on the variable of interest.
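A run chart is equally simple to produce; the following minimal Python sketch plots hypothetical daily measurements in time order:

import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
measurements = [5.1, 5.3, 4.9, 5.0, 5.4, 5.2, 4.8]  # hypothetical data

plt.plot(days, measurements, marker="o")  # preserve the time order on the x-axis
plt.xlabel("Day of the Week")
plt.ylabel("Measurement")
plt.title("Run Chart")
plt.show()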

Scatter Diagram

The simplest way to determine whether a relationship exists between two variables is to plot a scatter diagram. Figure 2-11 shows the relationship between automotive speed and gas mileage. The figure indicates that as speed increases, gas mileage decreases; that is, a negative relationship exists between the variables of interest. Automotive speed is plotted on the x-axis and is the so-called independent variable; the independent variable is usually controllable. Gas mileage is on the y-axis and is the dependent, or so-called response, variable.

Figure 2-11: Scatter Diagram

There are a few simple steps for constructing a scatter diagram. Data are collected as ordered pairs (x, y): the automotive speed is controlled, and the gas mileage is measured. The horizontal and vertical scales are constructed with higher values to the right on the x-axis and toward the top on the y-axis. After the scales are labeled, the data are plotted. Once the scatter diagram is complete, the relationship, or Pearson correlation (http://en.wikipedia.org/wiki/Correlation_coefficient), between the two variables can be determined. In MINITAB, all the relevant information can be derived using the scatter plot (GRAPH > Scatter Plot), correlation, and regression options, as shown with examples in Figure 2-12 and Figure 2-13.

[Figure: scatter plot of Cold Drinks Sales (Y) against Chicken Basket (X), generated in MINITAB; the points rise steadily from left to right.]

Figure 2-12: Scatter Plot in MINITAB with a positive correlation of 0.98

[Figure: scatter plot of Heat Flux against Direction-East, generated in MINITAB; the points show no clear trend.]

Figure 2-13: Scatter Plot in MINITAB with a weak correlation of 0.1

A weak correlation does not imply 'no' relationship: there may be a nonlinear relationship, which is not reflected by the Pearson correlation coefficient.
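The Pearson correlation behind such plots can also be computed directly. A minimal Python sketch with hypothetical paired data:

import numpy as np

# Hypothetical paired observations, e.g. x = chicken baskets sold, y = cold drink sales
x = np.array([650, 700, 750, 800, 850, 900, 950])
y = np.array([1000, 1060, 1130, 1180, 1260, 1320, 1390])

# Pearson correlation coefficient: cov(x, y) / (sd_x * sd_y)
r = np.corrcoef(x, y)[0, 1]
print(f"Pearson correlation r = {r:.2f}")  # close to +1: strong positive linear trend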

A few other graphical plots extensively used in quality data analysis are the control chart and the box plot. These are discussed below.


Control Chart

Quality control is one approach that an organization adopts to detect defects and take corrective actions. Quality control is employed to ensure the desired level of quality in the final goods and services; it is about the analysis of data for the rectification of errors with respect to time. Walter Shewhart developed the control chart in 1924. It focuses on monitoring the performance of the characteristic of interest over a period of time by looking at the variability in the data. There are two broad categories of control charts: control charts for attributes and control charts for variables. A variable control chart consists of a center line (CL) that represents the mean value of the characteristic of interest. In addition, two other horizontal lines, namely the Upper Control Limit (UCL) and the Lower Control Limit (LCL), are also shown on the chart. A typical variable control chart on the mean and range of a characteristic, the so-called X-bar and R chart, is shown below.

[Figure: X-bar and R charts for a paint thickness characteristic, plotted against sample number in MINITAB. X-bar chart: CL (x-double-bar) = 3.0608, UCL = 3.1145, LCL = 3.0071. R chart: CL (R-bar) = 0.0525, UCL = 0.1351, LCL = 0.]

Figure 2-14: A Variable Control Chart

The sample mean (x-bar) chart monitors the accuracy and central tendency of the process output characteristic, whereas the sample range (R) chart monitors the variation of the characteristic with respect to time. The calculation details for the UCL, CL, and LCL can be found in any Quality Management book (Mitra, 2008; Montgomery, 2008). In attribute-type control charts (used to monitor the number of defects or defectives), only one chart is used to monitor deviation with respect to time. More details on control charts are given in the section on statistical techniques below.
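For reference, the X-bar chart limits are commonly computed as x-double-bar ± A2 · R-bar, and the R chart limits as D4 · R-bar and D3 · R-bar, where A2, D3, and D4 are standard control-chart constants. A minimal Python sketch (with hypothetical thickness subgroups of size 5):

import numpy as np

# Hypothetical subgroups of size n = 5 (e.g., paint thickness in mm)
subgroups = np.array([
    [3.05, 3.08, 3.02, 3.07, 3.06],
    [3.09, 3.04, 3.06, 3.05, 3.08],
    [3.03, 3.07, 3.05, 3.09, 3.04],
])

A2, D3, D4 = 0.577, 0.0, 2.114  # standard constants for n = 5

xbar = subgroups.mean(axis=1)                      # subgroup means
R = subgroups.max(axis=1) - subgroups.min(axis=1)  # subgroup ranges
xbarbar, rbar = xbar.mean(), R.mean()

print("X-bar chart: CL =", round(xbarbar, 4),
      " UCL =", round(xbarbar + A2 * rbar, 4),
      " LCL =", round(xbarbar - A2 * rbar, 4))
print("R chart:     CL =", round(rbar, 4),
      " UCL =", round(D4 * rbar, 4),
      " LCL =", round(D3 * rbar, 4))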

Box Plot

A box plot displays the quartiles of a data set, along with any outliers. If we need to compare the variation of two data sets (say, two different service times), a box plot can help at the initial stage, before going into inferential statistics and hypothesis testing. A typical comparative box plot for two fast-food restaurants is shown below.

[Figure: comparative box plots of service time (roughly 100 to 350 on the vertical axis) for two restaurants, Macnalds and KYC; the KYC box and whiskers are much wider.]

Figure 2-15: A Comparative Box Plot

Each box in the graph shows the first quartile, the second quartile (median), and the third quartile. The extension line (whisker) beyond the box reaches to the most extreme data point within 1.5 × (interquartile range) of the box; a '*' beyond the whisker is considered an outlier. It is observed that although the service time medians of Macnalds and KYC seem close, the variability of the KYC data is much greater than that of Macnalds. Thus, Macnalds appears to be more consistent in service time than KYC.
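The quartile and whisker computations described above can be sketched directly in Python (hypothetical service times; the 1.5 × IQR fence rule as stated above):

import numpy as np

# Hypothetical service times (seconds) for one restaurant
times = np.array([110, 120, 125, 130, 135, 140, 150, 155, 160, 340])

q1, q2, q3 = np.percentile(times, [25, 50, 75])
iqr = q3 - q1

# Whiskers extend to the most extreme points within 1.5 * IQR of the box;
# anything beyond these fences is flagged as an outlier ('*' in the plot)
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = times[(times < lower_fence) | (times > upper_fence)]

print("Q1 =", q1, " median =", q2, " Q3 =", q3)
print("outliers:", outliers)  # 340 lies beyond the upper fence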

In addition to the above charts, the stem-and-leaf plot (http://www.youtube.com/watch?v=cOl-d3BERkM) and the multi-vari chart (http://en.wikipedia.org/wiki/Multi-vari_chart) are also useful in certain situations. These two are not discussed in detail here; details can be found in the literature, in books, and on the web.

In process quality improvement, not only are the 7 QC tools and plots important; a few statistical techniques are also extensively used for inferential statistics and decision making. Some of them are discussed below.

Statistical Techniques

A few important statistical techniques frequently used in quality improvement and decision making include hypothesis testing, regression analysis, sampling techniques, the two-sample t-test, Analysis of Variance (ANOVA), and Design of Experiments (DOE). These techniques are discussed briefly below.

HYPOTHESIS TESTING

Population parameters (say, the mean and variance) of a characteristic of interest are rarely known with certainty in statistical studies, and they are therefore estimated from sample information. The estimate of a parameter can be a point estimate or an interval estimate (with a confidence interval). However, many problems in engineering, science, and management require that we decide whether to accept or reject a statement about some parameter(s) of interest. The statement being challenged is known as the null hypothesis, and the decision-making procedure is called hypothesis testing. This is one of the most useful techniques for statistical inference. Many types of decision-making problems in engineering and science can be formulated as hypothesis-testing problems; for example, an engineer may wish to compare the mean of a population to a specified value. These simple comparative experiments are frequently encountered in practice and provide a good foundation for the more complex experimental design problems discussed subsequently. In the initial part of our discussion, we consider comparative experiments involving either one or two populations, and our focus is on testing hypotheses concerning the parameters of the population(s). We now give a formal definition of a statistical hypothesis.


Definition

A statistical hypothesis is a statement about parameter(s) of one or more

populations.

For example, suppose that we are interested in the burning rate of a solid propellant used to power aircrew escape systems. Burning rate is a random variable that can be described by a probability distribution. Suppose that our interest focuses on the mean burning rate (a parameter of this distribution); specifically, we are interested in deciding whether or not the mean burning rate is 60 cm/s. We may express this formally as

H0: μ = 60 cm/s
H1: μ ≠ 60 cm/s

The statement H0: μ = 60 cm/s is called the null hypothesis, and the statement H1: μ ≠ 60 cm/s is called the alternative hypothesis. Since the alternative hypothesis specifies values of μ that could be either greater or less than 60 cm/s, it is called a two-sided alternative hypothesis. In some situations, we may wish to formulate a one-sided alternative hypothesis, as in

H0: μ = 60 cm/s,  H1: μ < 60 cm/s

or

H0: μ = 60 cm/s,  H1: μ > 60 cm/s

It is important to remember that hypotheses are always statements about the population or distribution under study, not statements about the sample. An experimenter generally believes the alternative hypothesis to be true. Hypothesis-testing procedures rely on the information in a random sample from the population of interest, since complete information on the population (finite or infinite) is usually impossible to collect. If the sample information is consistent with the null hypothesis, we will not reject it; however, if the information is inconsistent with the null hypothesis, we will conclude that there is little evidence to support it.

The structure of hypothesis-testing problems is essentially the same in all the engineering and science applications considered here. Rejection of the null hypothesis always leads to accepting the alternative hypothesis. In our treatment of


hypothesis testing, the null hypothesis will always be stated so that it specifies an exact value of the parameter (as in the statement H0: μ = 60 cm/s). The alternative hypothesis allows the parameter to take on several values (as in the statement H1: μ ≠ 60 cm/s). Testing the hypothesis involves taking a random sample, computing a test statistic from the sample data, and using the test statistic to make a decision about the null hypothesis.

Testing a Statistical Hypothesis

To illustrate the general concepts, consider the propellant burning rate problem introduced earlier. The null hypothesis is that the mean burning rate is 60 cm/s, and the alternative is that it is not equal to 60 cm/s. That is, we wish to test

H0: μ = 60 cm/s
H1: μ ≠ 60 cm/s

Suppose that a sample of n = 10 specimens is tested and that the sample mean burning rate x̄ is observed. The sample mean is an estimate of the true population mean μ. A value of the sample mean x̄ that falls close to the hypothesized value μ = 60 cm/s is evidence that the true mean μ really is 60 cm/s; that is, such evidence supports the null hypothesis H0. On the other hand, a sample mean considerably different from 60 cm/s is evidence in support of the alternative hypothesis H1. Thus, the sample mean is the test statistic in this case.

Different samples will have different mean values. Suppose that if 58.5 ≤ x̄ ≤ 61.5 we will accept the null hypothesis H0: μ = 60, and that if either x̄ < 58.5 or x̄ > 61.5 we will accept the alternative hypothesis H1: μ ≠ 60. The values of x̄ less than 58.5 or greater than 61.5 constitute the rejection (critical) region for the test, while all values in the interval 58.5 to 61.5 form the acceptance region. The boundaries between the critical regions and the acceptance region are called the critical values; in our example they are 58.5 and 61.5. Thus, we reject H0 in favor of H1 if the test statistic falls in the critical region, and accept H0 otherwise. This acceptance region is defined based on the concept of the confidence interval and the level of


significance for the test. More details on confidence intervals and the level of significance can be found in various web links (http://en.wikipedia.org/wiki/Confidence_interval; http://www.youtube.com/watch?v=iX0bKAeLbDo) and books (Montgomery and Runger, 2010).

The hypothesis-testing decision procedure can lead to either of two wrong conclusions. For example, the true mean burning rate of the propellant could be equal to 60 cm/s; however, for the randomly selected propellant specimens that are tested, we could observe a value of the test statistic x̄ that falls into the critical region. We would then reject the null hypothesis H0 in favor of the alternative H1 when, in fact, H0 is really true. This type of wrong conclusion is called a Type I error, and it is usually regarded as a more serious mistake than the Type II error explained below.

Now suppose that the true mean burning rate is different from 60 cm/s, yet the sample mean x̄ falls in the acceptance region. In this case we would accept H0 when it is false. This type of wrong conclusion is called a Type II error.

Thus, in testing any statistical hypothesis, four different situations determine whether the final decision is correct or in error. These situations are presented in Table 2-1.

Because our decision is based on random variables, probabilities can be associated with the Type I and Type II errors. The probability of making a Type I error is denoted by the Greek letter α. That is,

α = P(Type I error) = P(reject H0 | H0 is true)

Sometimes the Type I error probability is called the significance level (α) or the size of the test.

Table 2-1 Type I and Type II Error

Decision      H0 is true      H0 is false
Accept H0     No error        Type II error
Reject H0     Type I error    No error

The steps followed in hypothesis testing are:

(i) Specify the null and alternative hypotheses.

(ii) Define the level of significance (α) based on the criticality of the experiment.

(iii) Decide the type of test to be used (left-tailed, right-tailed, two-tailed, etc.).

(iv) Depending on the test to be used and on the sample distribution and mean/variance information, define the appropriate test statistic (z-test, t-test, etc.).

(v) Considering the level of significance (α), obtain the critical values from standard statistical tables.

(vi) Decide on acceptance or rejection of the null/alternative hypothesis by comparing the test statistic value calculated from the sample, as defined in step (iv), with the critical value from the statistical table.

(vii) Derive a meaningful conclusion. A worked sketch of these steps follows.
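As an illustration of steps (i)-(vii), the sketch below runs a two-sided one-sample t-test for the burning-rate example. The ten observations are hypothetical, since the text does not list the actual sample data.

    from scipy import stats

    # Step (i): H0: mu = 60 cm/s versus H1: mu != 60 cm/s (two-sided).
    burning_rate = [60.3, 59.8, 61.2, 60.9, 58.7, 60.1, 59.5, 61.0, 60.4, 59.9]  # hypothetical

    alpha = 0.05  # step (ii): level of significance

    # Steps (iii)-(iv): two-tailed one-sample t-test (population variance unknown).
    t_stat, p_value = stats.ttest_1samp(burning_rate, popmean=60.0)

    # Steps (v)-(vii): the p-value is compared with alpha instead of a table look-up.
    print("t = %.3f, p = %.3f" % (t_stat, p_value))
    if p_value < alpha:
        print("Reject H0: the mean burning rate differs from 60 cm/s")
    else:
        print("Fail to reject H0: no evidence the mean differs from 60 cm/s")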

Regression Analysis

In many situations, two or more variables are inherently related, and it is necessary to explore the nature of this relationship (linear or nonlinear). Regression analysis is a statistical technique for investigating the relationship between two or more variables. For example, in a chemical process, suppose that the yield of a product is related to the process operating temperature. Regression analysis can be used to build a model (response surface) to predict yield at a given temperature level. This response surface can also be used for further process optimization, such as finding the temperature level that maximizes yield, or for process control purposes.

Let us look at Table 2-2, which gives paired data collected on the % hydrocarbon level (variable x) and the corresponding % purity of oxygen (variable y) produced in a chemical distillation process. The analyst is interested in estimating and predicting the value of y for a given level of x, within the range of experimentation.


Table 2-2 Data Collected on % Hydrocarbon Level (x) and % Purity of Oxygen (y)

Observation   Hydrocarbon    Purity   |  Observation   Hydrocarbon    Purity
number        level x (%)    y (%)    |  number        level x (%)    y (%)
 1               0.99        90.10    |   11              1.19        93.54
 2               1.00        89.05    |   12              1.15        92.52
 3               1.15        91.50    |   13              0.97        90.56
 4               1.29        93.74    |   14              1.01        89.54
 5               1.44        96.73    |   15              1.11        89.85
 6               1.36        94.45    |   16              1.22        90.39
 7               0.87        87.59    |   17              1.26        93.25
 8               1.23        91.77    |   18              1.32        93.41
 9               1.55        99.42    |   19              1.43        94.98
10               1.40        93.65    |   20              0.95        87.33

A scatter diagram is a first-hand visual tool for understanding the type of relationship; regression analysis is then recommended for developing a prediction model. Inspection of this scatter diagram (given in Figure 2-16) indicates that, although no simple curve will pass exactly through all the points, there is a strong indication that the points scatter randomly around a straight line.

[Figure 2-16 Scatter Diagram and Trend of Relationship: scatter plot of % purity (y), roughly 86 to 100, against % hydrocarbon level (x), roughly 0.8 to 1.6, with an apparent linear trend.]


Therefore, it is probably reasonable to assume that the mean of the random variable Y is related to x by the following straight-line relationship:

E(Y|x) = μ_{Y|x} = β0 + β1·x

where the regression coefficient β0 is the intercept and β1 is the slope of the line; both are estimated by the ordinary least squares method. While the mean of Y is a linear function of x, an actual observed value y does not fall exactly on the straight line. The appropriate way to generalize this to a probabilistic linear model is to assume that the expected value of Y is a linear function of x, but that for a particular value of x the actual value of Y is the mean value from the linear regression model plus a random error term:

Y = β0 + β1·x + ε

where ε is the random error term. We call this the simple linear regression model because it has only one independent variable (x), or regressor. Sometimes such a model arises from a theoretical relationship; often, however, we have no theoretical knowledge of the relationship between x and y, and the choice of model is based on inspection of a scatter diagram, as we did with the oxygen purity data. We then think of the linear regression model as an empirical model with uncertainty (error).

The regression option in MINITAB can be used to obtain all the results of the model. The results derived from the MINITAB regression option using the oxygen purity data set are provided below.

The regression equation is
% Purity (y) = 74.5 + 14.8 % Hydrocarbon Level(x)

Predictor                  Coef  SE Coef      T      P
Constant                 74.494    1.666  44.73  0.000
% Hydrocarbon Level(x)   14.796    1.378  10.74  0.000

S = 1.13889   R-Sq = 86.5%   R-Sq(adj) = 85.7%

Analysis of Variance
Source          DF      SS      MS       F      P
Regression       1  149.55  149.55  115.30  0.000
Residual Error  18   23.35    1.30
Total           19  172.90

We look at the R-Sq value: as a working rule, if it is more than 70% we take the linear relationship as adequate and conclude that y depends on x. More details on the interpretation of the other values are given in the MINITAB help menu, or readers can refer to any standard statistics or quality management book.

In this context, the P-value and its interpretation are important in both hypothesis testing and regression analysis. The general interpretation is that if the P-value is less than 0.05 (testing at the 5% level of significance), the null hypothesis is to be rejected. Readers may refer to the web (http://www.youtube.com/watch?v=lm_CagZXcv8; http://www.youtube.com/watch?v=TWmdzwAp88k) for more details on the P-value and its interpretation.
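For readers who prefer open-source tools, the sketch below fits the same simple linear regression model to the Table 2-2 data using scipy; the fitted intercept, slope, R-Sq, and P-value should come out close to the MINITAB output shown above.

    from scipy import stats

    # Oxygen purity data from Table 2-2.
    x = [0.99, 1.00, 1.15, 1.29, 1.44, 1.36, 0.87, 1.23, 1.55, 1.40,
         1.19, 1.15, 0.97, 1.01, 1.11, 1.22, 1.26, 1.32, 1.43, 0.95]
    y = [90.10, 89.05, 91.50, 93.74, 96.73, 94.45, 87.59, 91.77, 99.42, 93.65,
         93.54, 92.52, 90.56, 89.54, 89.85, 90.39, 93.25, 93.41, 94.98, 87.33]

    fit = stats.linregress(x, y)           # ordinary least squares fit
    print("intercept b0    = %.3f" % fit.intercept)
    print("slope     b1    = %.3f" % fit.slope)
    print("R-Sq            = %.3f" % fit.rvalue ** 2)
    print("p-value (slope) = %.4f" % fit.pvalue)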

Common Abuses of Regression

Regression is widely used and frequently misused. Care should be taken in selecting the variables with which to construct regression equations and in determining the form of the model. It is possible to develop statistical relationships among variables that are completely unrelated in a practical sense. For example, we might attempt to relate the shear strength of spot welds to the number of boxes of computer paper used by the information systems group. A straight line may even appear to provide a good fit to the data, but the relationship is an unreasonable one. A strong observed association between variables does not necessarily imply that a causal relationship exists between them; designed experimentation is the only way to establish causal relationships.

Regression relationships are also valid only for values of the regressor variable within the range of the original experimental or observed data, and they are unlikely to remain valid as we extrapolate. That is, if we use values of x beyond the range of the observations, we become less certain about the validity of the assumed model and its predictions.


Process quality monitoring and control

Process quality is monitored by using acceptance sampling techniques, and control is achieved through statistical process control (SPC) charts. Monitoring is essential in order to utilize the full potential of the process. Statistical quality control dates back to the 1920s. Dr. Walter A. Shewhart of the Bell Telephone Laboratories was one of the early pioneers of the field; in 1924 he wrote a memorandum showing a modern control chart, one of the basic tools of statistical process control. Dr. W. Edwards Deming and Dr. Joseph M. Juran have been instrumental in spreading statistical quality-control methods since World War II.

In any production process, regardless of how well designed or carefully maintained it is, a certain amount of inherent or natural variability will always exist. This natural variability, or "noise", is the cumulative effect of many small, essentially unavoidable causes. When the noise in a process is relatively small, we usually consider it an acceptable level of process performance. In the framework of statistical quality control, this natural variability is often called "chance cause variability". A process that is operating with only chance causes of variation is said to be in statistical control; in other words, the chance causes are an inherent part of the process.

Other kinds of variability may occasionally be present in the output of a process. This variability in key quality characteristics usually arises from sources such as improperly adjusted machines, operator errors, or defective raw materials. Such variability is generally large when compared to the background noise, and it usually represents an unacceptable level of process performance. We refer to these sources of variability, which are not part of the chance cause pattern, as assignable causes. A process operating in the presence of assignable causes is said to be out of control. There are various statistical control charts for identifying out-of-control signals. Typically, any control chart has an upper control limit, a lower control limit, and a central line, as shown in Figure 2-17.


Figure 2-17 Control Chart Limits

There is a close connection between control charts and hypothesis testing. Essentially, the control chart is a test of the hypothesis that the process is in a state of statistical control.

There are attribute control charts (say, for monitoring defects or defectives) and variable control charts; a variable control chart example was given earlier. A 'c' attribute chart monitors defects. A 'c' chart is shown below, where counts of defects in engine assemblies are collected over a period of time: at a particular time a sample engine assembly is selected, and the number of defects in the assembly is recorded and monitored.


[Figure 2-18 A c-type Attribute Control Chart: sample defect count per engine assembly plotted against sample number, with CL (c-bar) = 3.2, UCL = 8.5, LCL = 0.]

A p-chart is used to monitor the fraction of defectives.

Details on the various types of control charts and their specific applications in different situations can be found in many well-known textbooks (Mitra, 2008; Montgomery, 2008) or on the web (http://www.youtube.com/watch?v=gTxaQkuv6sU).
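The c-chart limits follow directly from the Poisson assumption for defect counts: CL = c-bar and UCL/LCL = c-bar ± 3·sqrt(c-bar), with the LCL truncated at zero. The sketch below uses hypothetical defect counts chosen so that c-bar = 3.2, matching the centre line in Figure 2-18; it gives UCL = 3.2 + 3·sqrt(3.2) ≈ 8.57 and LCL = 0, in line with the limits shown there.

    import math

    # Hypothetical defect counts, one engine assembly per sampling instant.
    defect_counts = [3, 2, 5, 4, 1, 3, 6, 2, 4, 2]
    cbar = sum(defect_counts) / len(defect_counts)   # CL = c-bar = 3.2 here

    ucl = cbar + 3 * math.sqrt(cbar)                 # 3-sigma limits for a Poisson count
    lcl = max(0.0, cbar - 3 * math.sqrt(cbar))       # truncated at zero
    print("CL=%.2f UCL=%.2f LCL=%.2f" % (cbar, ucl, lcl))

    out_of_control = [c for c in defect_counts if c > ucl or c < lcl]
    print("out-of-control points:", out_of_control)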

Control charts are closely related to acceptance sampling plans, which were developed by Harold F. Dodge and Harry G. Romig, employees of the Bell System. Acceptance sampling plans are discussed in the following section.

Acceptance sampling plan

Acceptance sampling is concerned with inspection and decision making regarding product quality. In the 1930s and 1940s, acceptance sampling was one of the major components of quality control and was used primarily for incoming or receiving inspection.

A typical application of acceptance sampling is as follows: a company receives a shipment of product from its vendor. This product is often a component or raw material used in the company's manufacturing process. A sample is taken from the lot, and some quality characteristic of the units in the sample is inspected against the specification. On the basis of the information in this sample, a decision is made regarding acceptance or rejection of the whole lot; sometimes we refer to this decision as lot sentencing. Accepted lots are put into production; rejected lots are returned to the vendor or may be subjected to some other lot-disposition action.

Although it is customary to think of acceptance sampling as a receiving inspection activity, there are other uses of sampling methods. For example, a manufacturer will frequently sample and inspect its own product at various stages of production. Lots that are accepted are sent forward for further processing; rejected lots may be reworked or scrapped.

Three aspects of acceptance sampling are worth emphasizing:

1) The purpose of acceptance sampling is to decide on the acceptance of lots, not to estimate lot quality. Acceptance-sampling plans do not provide any direct form of quality control.

2) Acceptance sampling simply accepts and rejects lots; it is a post-mortem kind of activity. Statistical process control is used to control and systematically improve quality by reducing variability, but acceptance sampling is not.

3) The most effective use of acceptance sampling is not to "inspect quality into the product," but rather as an audit tool to ensure that the output of a process conforms to requirements.

Advantages and Limitations of a Sampling Plan Compared with 100% Inspection

In comparison with 100% inspection, acceptance sampling has the following advantages:

(i) It is usually less expensive because there is less inspection.

(ii) There is less handling of the product and hence reduced damage.

(iii) It is applicable to destructive testing, where 100% inspection is impossible.

(iv) Fewer personnel are involved in inspection activities.

(v) It often greatly reduces the amount of inspection error.

(vi) Rejection of entire lots, as opposed to the simple return of defectives, provides a stronger motivation to suppliers for quality improvement.

Acceptance sampling also has several limitations:

(i) There are risks of rejecting "good" lots (Type I error) and accepting "bad" lots (Type II error).

(ii) Less information is usually generated about the product.

Types of Sampling Plans

There are a number of ways to classify acceptance-sampling plans. One major classification is by attributes and variables. Variables are quality characteristics measured on a numerical scale, whereas attributes are quality characteristics expressed on a "go, no-go" basis.

A single-sampling plan is a lot-sentencing procedure in which one sample of n units is selected at random from the lot, and the disposition of the lot is determined based on the information contained in that sample. For example, a single-sampling plan for attributes consists of a sample size n and an acceptance number c. The procedure operates as follows: select n items at random from the lot; if the number of defectives in the sample is at most c, accept the lot, and if it exceeds c, reject the lot.
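Under this rule, the probability of accepting a lot whose fraction defective is p follows from the binomial distribution: P(accept) = P(D ≤ c), where D is the number of defectives in the sample. The sketch below uses hypothetical plan parameters n = 50 and c = 2; evaluating it over a range of p values traces the plan's operating characteristic (OC) curve.

    from scipy.stats import binom

    n, c = 50, 2   # hypothetical sample size and acceptance number

    # P(accept) = P(D <= c), where D ~ Binomial(n, p) is the defective count.
    for p in [0.01, 0.02, 0.05, 0.10]:
        pa = binom.cdf(c, n, p)
        print("fraction defective p = %.2f -> P(accept lot) = %.3f" % (p, pa))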

Double-sampling plans are somewhat more complicated. Following an initial sample, a decision based on the information in that sample is made either to accept the lot, to reject the lot, or to take a second sample. If a second sample is taken, the information from both the first and second samples is combined in order to reach a decision on whether to accept or reject the lot.

A multiple-sampling plan is an extension of the double-sampling concept, in that more than two samples may be required in order to reach a decision regarding the disposition of the lot. Sample sizes in multiple sampling are usually smaller than they are in either single or double sampling. The ultimate extension of multiple sampling is sequential sampling, in which units are selected from the lot one at a time and, following the inspection of each unit, a decision is made either to accept the lot, to reject the lot, or to select another unit.

Random Sampling


Units selected for inspection from the lot should be chosen at random, and they should be representative of all the items in the lot. The random-sampling concept is extremely important in acceptance sampling and statistical quality control; unless random samples are used, bias may be introduced. For example, a supplier may ensure that the units packaged at the top of the lot are of extremely good quality, knowing that the inspector will select the sample from the top layer. Randomization also helps in identifying any hidden factors during experimentation.

The technique often suggested for drawing a random sample is to first assign a number to each item in the lot. Then n random numbers are drawn (from a random number table or using Excel/statistical software), where the range of these numbers is from 1 to the maximum number of units in the lot. This sequence of random numbers determines which units in the lot constitute the sample. If products have serial or other code numbers, these numbers can be used to avoid the process of actually assigning a number to each unit. Details on different sampling plans can be found in Mitra (2008).
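The numbering technique above amounts to sampling without replacement from the unit numbers 1..N. A minimal sketch, with hypothetical lot and sample sizes:

    import random

    N = 500    # hypothetical lot size
    n = 20     # hypothetical sample size

    # Draw n distinct unit numbers from 1..N, i.e. sampling without replacement.
    sample_units = sorted(random.sample(range(1, N + 1), n))
    print("inspect unit numbers:", sample_units)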

Process improvement Tools

Acceptance sampling and statistical quality control techniques may not significantly reduce variability in the output. Process improvement through variation reduction is an important feature of quality management, and a variety of statistical tools is available for improving processes. Some of them are discussed below.

ANOVA

Many experiments involve more than two levels of a factor, and the experimenter is interested in understanding the influence of the factor on the variability of the output characteristic. In this case, analysis of variance (ANOVA) is the appropriate statistical technique. The technique is explained with the help of the following example.

Say a product development engineer is interested in investigating the tensile strength of a new synthetic fiber. The engineer knows from previous experience that the strength is affected by the weight percent of cotton used in the blend of materials for the fiber. Furthermore, she suspects that increasing the cotton content will increase the strength, at least initially. She also knows that the cotton content should range from 1 to 25 percent if the final product is to have the other desired quality characteristics. The engineer decides to test specimens at four levels of cotton weight percent (5, 10, 15, and 20 percent) and to test five specimens at each level of cotton content. This is an example of a single-factor (cotton weight %) experiment with a = 4 levels of the factor and n = 5 replicates. The 20 runs should be made in random sequence.

Table 2-3 Experimental Data

Cotton weight       Experimental run number
percentage          1     2     3     4     5
 5                  7     8    15    11     9
10                 12    17    13    18    19
15                 14    18    19    17    16
20                 25    22    23    18    20

The randomized test sequence is necessary to prevent the effects of unknown nuisance variables (or hidden factors), which may vary out of control during the experiment, from contaminating the results. Balanced experiments, with an equal number of replicates at each level, are also preferred because they minimize the experimental error.

This is a single-factor analysis of variance model, known as a fixed effects model because the factor levels are specifically chosen by the experimenter. Recall that y_ij represents the jth observation (j = 1, ..., n replicates) under the ith treatment. Let ȳi. represent the average of the observations under the ith treatment. Similarly, let y.. represent the grand total of all the observations and ȳ.. the grand average of all the observations.

Expressed symbolically,

yi. = Σj y_ij,    ȳi. = yi./n,    i = 1, 2, ..., a

y.. = Σi Σj y_ij,    ȳ.. = y../N

where N = a·n is the total number of observations, the sum over j runs from 1 to n, and the sum over i runs from 1 to a. The "dot" subscript notation used in the above equations implies summation over the subscript that it replaces.

The appropriate hypotheses are


H0: μ1 = μ2 = ... = μa
H1: μi ≠ μj for at least one pair (i, j)


Decomposition of the total sum of squares

The name analysis of variance is derived from a partitioning of total variability into its

component parts. The total corrected sum of squares

SS_T = Σi Σj (y_ij − ȳ..)²,  summed over i = 1, ..., a and j = 1, ..., n,

is used as a measure of the overall variability in the data. Intuitively, this is reasonable because, if we were to divide SS_T by the appropriate number of degrees of freedom (in this case, N − 1), we would have the sample variance of the y's. The sample variance is, of course, a standard measure of variability.

Note that the total corrected sum of squares SS_T may be written as

SS_T = Σi Σj (y_ij − ȳ..)² = Σi Σj [(ȳi. − ȳ..) + (y_ij − ȳi.)]²

or

SS_T = n Σi (ȳi. − ȳ..)² + Σi Σj (y_ij − ȳi.)² + 2 Σi Σj (ȳi. − ȳ..)(y_ij − ȳi.)

and, since the cross-product term can be shown to vanish, we can rewrite the overall expression for the total sum of squares SS_T as

SS_T = SS_Treatments + SS_E

where SS_Treatments is called the sum of squares due to treatments (i.e., between treatments) and SS_E is called the sum of squares due to error (i.e., within treatments). There are N total observations; thus SS_T has N − 1 degrees of freedom. There are a levels of the factor (and a treatment means), so SS_Treatments has a − 1 degrees of freedom. Finally, within any treatment there are n replicates providing n − 1 degrees of freedom with which to estimate the experimental error. Because there are a treatments, we have a(n − 1) = an − a = N − a degrees of freedom for error.
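The decomposition can be verified numerically. The sketch below computes SS_T, SS_Treatments, and SS_E for the cotton data of Table 2-3; the values should agree with the MINITAB ANOVA output reproduced later (SS_Treatments = 340.15, SS_E = 122.80, SS_T = 462.95).

    import numpy as np

    # Tensile strength observations from Table 2-3, keyed by cotton weight %.
    groups = {5:  [7, 8, 15, 11, 9],
              10: [12, 17, 13, 18, 19],
              15: [14, 18, 19, 17, 16],
              20: [25, 22, 23, 18, 20]}

    y = np.array([obs for g in groups.values() for obs in g])
    grand_mean = y.mean()

    ss_total = ((y - grand_mean) ** 2).sum()                     # SS_T
    ss_treat = sum(len(g) * (np.mean(g) - grand_mean) ** 2       # SS_Treatments
                   for g in groups.values())
    ss_error = ss_total - ss_treat                               # SS_E

    a, N = len(groups), len(y)
    ms_treat = ss_treat / (a - 1)       # mean square between treatments
    ms_error = ss_error / (N - a)       # mean square within treatments
    print("SS_T=%.2f SS_Treatments=%.2f SS_E=%.2f" % (ss_total, ss_treat, ss_error))
    print("F0 = %.2f" % (ms_treat / ms_error))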


Statistical Analysis

The Analysis of variance table (Table 2-4) for the single-factor fixed effects model is given

below

Table 2-4 ANOVA Table with Formulas

Source of variation         Sum of squares                      Degrees of freedom   Mean square     F0
Between treatments          SS_Treatments = n Σi (ȳi. − ȳ..)²   a − 1                MS_Treatments   F0 = MS_Treatments / MS_E
Error (within treatments)   SS_E = SS_T − SS_Treatments         a(n − 1)             MS_E
Total                       SS_T = Σi Σj (y_ij − ȳ..)²          an − 1

Because the degrees of freedom for SS_Treatments and SS_E add to N − 1, the total number of degrees of freedom, Cochran's theorem implies that SS_Treatments/σ² and SS_E/σ² are independently distributed chi-square random variables. Therefore, if the null hypothesis of no difference in treatment means is true, the ratio

F0 = [SS_Treatments / (a − 1)] / [SS_E / (N − a)] = MS_Treatments / MS_E

is distributed as F with a − 1 and N − a degrees of freedom. This is the test statistic for the hypothesis of no differences in treatment means.

From the expected mean squares we see that, in general, MS_E is an unbiased estimator of σ². Also, under the null hypothesis, MS_Treatments is an unbiased estimator of σ². However, if the null hypothesis is false, the expected value of MS_Treatments is greater than σ². Therefore, under the alternative hypothesis, the expected value of the numerator of the test statistic F0 is greater than the expected value of the denominator, and we should reject H0 for values of the test statistic that are too large. This implies an upper-tailed, one-sided critical region. Therefore, we should reject H0 and conclude that there are differences in the treatment means if

F0 > F(α; a − 1, N − a)

where F0 is computed from the equation above. Alternatively, we can use the p-value approach for decision making, as provided by statistical software such as MINITAB or SAS.
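Equivalently, the packaged one-way ANOVA F test can be run with scipy on the Table 2-3 data; F0 and the p-value should agree with the MINITAB output shown below.

    from scipy import stats

    # Tensile strength at each cotton weight percent (Table 2-3).
    w5  = [7, 8, 15, 11, 9]
    w10 = [12, 17, 13, 18, 19]
    w15 = [14, 18, 19, 17, 16]
    w20 = [25, 22, 23, 18, 20]

    F0, p = stats.f_oneway(w5, w10, w15, w20)
    print("F0 = %.2f, p = %.5f" % (F0, p))   # expect F0 close to 14.77, p well below 0.05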

Using MINITAB, we can obtain the following graphs and results for the above-mentioned experiment on tensile strength.

[Figure 2-19 Box Plot of Data: box plots of tensile strength (roughly 5 to 25) at each weight percent of cotton (5, 10, 15, 20).]

From the box plot it is observed that as the cotton weight % increases, tensile strength also improves. However, whether any two means are significantly different cannot be judged from the box plot alone.


[Figure 2-20 Normal Probability Plot of Residuals (response: tensile strength): percent plotted against residual.]

The residual plot confirms the normality assumption for the error terms; conclusions drawn when the error is non-normal may be erroneous. The normality assumption can be tested using the Anderson-Darling test statistic provided in MINITAB (Stat -> Basic Statistics -> Normality Test).

One-way ANOVA: Tensile Strength versus Weight percent of cotton

Source                     DF      SS      MS      F      P
Weight percent of cotton    3  340.15  113.38  14.77  0.000
Error                      16  122.80    7.67
Total                      19  462.95

S = 2.770   R-Sq = 73.47%   R-Sq(adj) = 68.50%

Individual 95% CIs for the mean, based on the pooled StDev:

Level   N    Mean   StDev
 5      5  10.000   3.162
10      5  15.800   3.114
15      5  16.800   1.924
20      5  21.600   2.702

Pooled StDev = 2.770

Figure 2-21 ANOVA Results in MINITAB


The ANOVA results given above confirm that changing the cotton weight % significantly influences tensile strength (F = 14.77, P = 0.000), and the R-Sq value shows that the factor accounts for about 73% of the total variability.

To determine the best setting of cotton weight %, one can carry out the Fisher LSD comparison test, as given in Figure 2-22.

Fisher 95% Individual Confidence Intervals
All Pairwise Comparisons among Levels of Weight percent of cotton
Simultaneous confidence level = 81.11%

Weight percent of cotton = 5 subtracted from:
Weight percent of cotton    Lower    Center     Upper
10                          2.086     5.800     9.514
15                          3.086     6.800    10.514
20                          7.886    11.600    15.314

Weight percent of cotton = 10 subtracted from:
Weight percent of cotton    Lower    Center     Upper
15                         -2.714     1.000     4.714
20                          2.086     5.800     9.514

Weight percent of cotton = 15 subtracted from:
Weight percent of cotton    Lower    Center     Upper
20                          1.086     4.800     8.514

Figure 2-22 Fisher Comparison Test


The comparison tests confirm that a cotton weight percentage of 20 is significantly different from 15 percent (its interval does not contain zero), and thus a setting of 20 is suggested. Details on the interpretation of comparison tests and ANOVA are given in the MINITAB examples, the MINITAB help, and any standard textbook on quality (Montgomery, D. C., 2014).
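The half-width of the Fisher intervals in Figure 2-22 is the least significant difference, LSD = t(α/2; N − a) · sqrt(2·MS_E/n). The sketch below recomputes it from the ANOVA output (MS_E = 7.67 with 16 error degrees of freedom, n = 5 replicates per level) and applies it to all pairwise mean differences.

    from scipy import stats
    import math

    mse, df_error, n, alpha = 7.67, 16, 5, 0.05   # from the ANOVA output above

    t_crit = stats.t.ppf(1 - alpha / 2, df_error)   # two-sided critical t value
    lsd = t_crit * math.sqrt(2 * mse / n)
    print("LSD = %.2f" % lsd)                        # about 3.71, as in Figure 2-22

    means = {5: 10.0, 10: 15.8, 15: 16.8, 20: 21.6}  # treatment means from MINITAB
    levels = sorted(means)
    for i, first in enumerate(levels):
        for second in levels[i + 1:]:
            diff = abs(means[second] - means[first])
            verdict = "significant" if diff > lsd else "not significant"
            print("%d%% vs %d%%: |difference| = %.1f -> %s"
                  % (first, second, diff, verdict))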

Designed Experiment

Statistical design of experiments refers to the process of planning an experiment so that appropriate data, which can be analyzed by statistical methods, are collected, resulting in valid and objective conclusions. A statistical approach to experimental design is necessary if we wish to draw meaningful conclusions from the data, and it helps in confirming any causal relationship. When a problem involves data that are subject to experimental error, statistical methodology is the only objective approach to analysis. Thus, there are two aspects to any experimental problem: the design of the experiment and the statistical analysis of the data.

Three basic principles of experimental design are replication, randomization, and blocking (local control). By replication we mean a repetition of the basic trial on different samples. In a metallurgical experiment, for example, a twofold replication would consist of treating two specimens by oil quenching. Thus, if five specimens are treated in a quenching medium at different points in time, we say that five replicates have been obtained.

Randomization is the cornerstone underlying the use of statistical methods in experimental design. By randomization we mean that both the allocation of the experimental material and the order in which the individual runs or trials of the experiment are performed are randomly determined. Statistical methods require that the observations (or errors) be independently distributed random variables, and randomization usually validates this assumption. By properly randomizing an experiment, we also assist in "averaging out" the effects of extraneous (hidden) factors that may be present.

Blocking (or local control) of nuisance variables is a design technique used to improve the precision with which comparisons among the factors of interest are made. For example, an experiment in a chemical process may require two batches of raw material to make all the required runs. However, there could be differences between the batches due to supplier-to-supplier variability, and if we are not specifically interested in this effect, we would treat the batches of raw material as a nuisance factor. Generally, a block is a set of relatively homogeneous experimental conditions. There are many design options for blocking nuisance variables.

Guidelines for designing experiment

To use a statistical approach in designing and analyzing an experiment, it is necessary for everyone involved in the experiment to have a clear idea in advance of exactly what is to be studied (the objective of the study), how the data are to be collected, and at least a qualitative understanding of how the data are to be analyzed. The section below briefly outlines and elaborates on some of the key steps in Design of Experiments (DOE). Remember that an experiment can fail; however, it always provides some meaningful information.

Recognition of a problem and its statement - This may seem a rather obvious point, but in practice it is often not simple to realize that a problem requiring experimentation exists, nor is it simple to develop a clear and generally accepted statement of the problem. It is necessary to develop all ideas about the objectives of the experiment, and it is usually important to solicit input from all concerned parties: engineering, quality assurance, manufacturing, marketing, management, customers (internal or external), and operating personnel.

It is usually helpful to prepare a list of specific problems or questions that are to be addressed by the experiment. A clear statement of the problem often contributes substantially to a better understanding of the phenomenon being studied and to the final solution of the problem. It is also important to keep the overall objective in mind.

Choice of factors, levels, and range - When considering the factors that may influence the performance of a process or system, the experimenter usually finds that these factors can be classified as either potential design (x) factors or nuisance (z) factors. Potential design factors are those factors that the experimenter may wish to vary during the experiment. Often there are many potential design factors, and some further classification of them is necessary, for example into design factors, held-constant factors, and allowed-to-vary factors. The design factors are the factors actually selected for study in the experiment. Held-constant factors are variables that may exert some effect on the response but are not of interest for the purposes of the present experiment, so they are held at a specific level.

Nuisance (allowed-to-vary) factors, on the other hand, may have large effects that must be accounted for, even though we are not interested in them in the context of the present experiment.

Selection of the response variable - In selecting the response variable, the experimenter should be certain that this variable really provides useful information about the process under study. Most often, the average or standard deviation (or both) of the measured characteristic will be the response variable. It is critically important to identify the issues related to defining the responses of interest, and how they are to be measured, before conducting the experiment. Sometimes designed experiments are employed to study and improve the performance of measurement systems themselves.

Choice of experimental design - If the pre-experimental planning activities mentioned above are done correctly, this step is relatively easy. The choice of design involves consideration of the sample size (number of replicates), keeping in mind the precision required for the experiment, the selection of a suitable run order for the experimental trials, and the determination of whether or not blocking restrictions are involved.

Performing the experiment - When running an experiment, it is vital to monitor the process carefully to ensure that everything is being done according to plan. Errors in experimental procedure, or instrumentation errors during measurement, at this stage will usually destroy experimental validity, so up-front planning is crucial to success. It is easy to underestimate the logistical and planning aspects of running a designed experiment in a complex manufacturing or research and development environment. Coleman and Montgomery (1993) suggest that, prior to conducting the experiment, a few trial runs or pilot runs are often helpful.

Statistical analysis of the data - Statistical methods should be used to analyze the data so that the results and conclusions are objective rather than judgmental in nature. If the experiment has been designed correctly, the statistical methods required are not elaborate. There are many excellent software packages (JMP, MINITAB, and DESIGN EXPERT) to assist in data analysis, and simple graphical methods often play an important role in data analysis and interpretation.


Conclusions and recommendations - Once the data have been analyzed, the experimenter must draw practical conclusions from the results and recommend a course of action. Graphical methods are often useful at this stage, particularly in presenting the results to others. Follow-up runs and confirmation testing should also be performed to validate the conclusions from the experiment.

There are many design options for statistical experiments. For a two-factor experiment, the basic design is a two-way ANOVA. For a larger number of factors, a factorial design using an orthogonal array is typically used. There are also central composite designs (CCD), extremely useful for identifying higher-order terms in the response surface model developed from a factorial design (http://www.youtube.com/watch?v=Z-uqadwwFsU). The three-level Box-Behnken design (BBD) (http://www.itl.nist.gov/div898/handbook/pri/section3/pri3362.htm) is also very useful for identifying quadratic terms and interactions in a factorial design. A fractional factorial design is recommended if more than 8 factors are to be studied and there is a need to reduce the number of factors to the vital few; this is also known as a 'screening experiment'. Sequential DOE or response surface designs are used to reach the global optimal setting in the case of a unimodal function. Taguchi's method is a nonconventional approach used when there is little or no higher-order interaction. Desirability functions and dual response optimization may be used when there are multiple y's to be optimized simultaneously. The book by Montgomery, D.C. (2014) is an excellent reference for learning DOE techniques.
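As a small illustration of these ideas, the sketch below generates a two-level full factorial design (2^3 = 8 runs) in coded -1/+1 units for three hypothetical factors and randomizes the run order, as the randomization principle requires.

    import itertools
    import random

    factors = ["temperature", "pressure", "time"]   # hypothetical factor names

    # All 2^3 combinations of low (-1) and high (+1) levels.
    design = list(itertools.product([-1, +1], repeat=len(factors)))
    random.shuffle(design)   # randomized run order

    for run_no, levels in enumerate(design, start=1):
        settings = dict(zip(factors, levels))
        print("run %d: %s" % (run_no, settings))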


Module II. Process Quality Improvement

Lecture – 3 How TQM principal is aligned with process quality improvement?

Total quality management (TQM) is a strategy for implementing and managing quality improvement activities on an organization-wide basis. TQM began in the early 1980s, with the philosophies of W. Edwards Deming and Joseph Juran as the focal point. It then evolved into a broader spectrum of concepts and ideas involving a participative organization and work culture, customer focus, supplier quality improvement, integration of the quality system with business goals, and many other activities that focus all elements of the organization around the quality and process improvement goal. Typically, organizations that have implemented a TQM approach to quality improvement have quality councils or high-level teams that deal with strategic quality initiatives, workforce-level teams that focus on routine production or business activities, and cross-functional teams that address specific process quality improvement issues. TQM strongly emphasizes variability reduction, the prime theme of process quality improvement.

However, TQM has had only moderate success in process improvement, for a variety of reasons. Some general reasons for the lack of conspicuous success of TQM include (i) lack of top-down, high-level management commitment and involvement; (ii) inadequate use of statistical methods and insufficient recognition of variability reduction as a prime objective; (iii) diffuse, as opposed to focused and specific, objectives; and (iv) too much emphasis on widespread training as opposed to focused technical education and actual implementation.

Some of the approaches extended from and used within TQM philosophies are:

Quality Standards

The International Organization for Standardization (ISO) has developed a series of quality standards, including the ISO 9000 series. The focus of these standards is the quality system, including components such as management responsibility for quality; design control; document and data control; purchasing and contract management; product identification and traceability; inspection and testing, including control of measurement and inspection equipment; process control; handling of nonconforming product; corrective and preventive actions; handling, storage, packaging, and delivery of product; service activities; control of quality records; internal audits; training; and statistical methods.


Just-in-Time, Lean Manufacturing, Poka-Yoke, and Others

There are many initiatives devoted to improving production and manufacturing processes. Some of these include the Just-in-Time approach, emphasizing in-process inventory reduction, reduced set-up time (SMED), and a pull-type production system; Poka-Yoke, or mistake-proofing of processes; the Toyota Production System (TPS); reengineering; the Theory of Constraints (TOC); agile manufacturing; lean manufacturing; and so on.

Customer Focus

The most important asset of any organization is its customers. An organization's success depends on how many customers it has, how much they buy, and how often they buy. Satisfied customers buy more, and more frequently, and they pay their bills promptly, which greatly improves an organization's cash flow.

Increasingly, manufacturing and service organizations are using customer satisfaction as a measure of quality. This fact is reflected in the Malcolm Baldrige National Quality Award, where customer satisfaction accounts for 30 percent of the total points. Similarly, customer satisfaction standards are woven throughout the ISO 9000:2008 standard. Customer satisfaction is one of the major focuses of an effective quality management system.

Who are the Customers?

There are two distinct types of customers: external and internal. An external customer is one who uses the end product or service, or one who purchases the product or service. Internal customers, by contrast, are defined within the organization's interlinked processes (e.g., the marketing department may be an internal customer of the production department).

Customer Perception of Quality

One of the basic concepts of the TQM philosophy is continual improvement. This concept implies that there is no permanently acceptable quality level, because the customer's needs, values, and expectations are constantly changing.

Before making a major purchase, some people check consumer magazines that rate product quality. During 1980 to 1988, the quality of a product and its performance ranked first, price was second, and service was third. During 1989 to 1992, product quality remained the most important factor, but service ranked above price in importance.

An American Society for Quality (ASQ) survey of end-user perceptions of the important factors that influence purchases showed the following ranking:

1. Performance

2. Features

3. Service

4. Warranty

5. Price

6. Reputation

The factors of performance, features, service, and warranty are part of product or service quality; therefore, it is evident that product quality and service are more important than price. Although this information is based on retail customers, it appears, to some extent, to be true for organizational customers also.

Translating the Customer Needs into Requirements

The Kano model, shown in Figure 2-23, conceptualizes customer requirements. The model represents three major areas of customer satisfaction. The first area, represented by the diagonal line, covers explicit requirements. These include written or verbal requirements; they are easily identified, expected to be met, and typically performance related. Satisfying the customer would be relatively simple if these were the only requirements.

The second area of customer satisfaction represents innovations, shown by the curved line in the upper left corner of the figure. A customer's written instructions are often purposefully vague, to avoid stifling new ideas during conceptualization and product definition. Because they are unexpected, these creative ideas often excite and delight the customer; however, over time these ideas quickly become expected.

The third and most significant area of customer satisfaction represents unstated or unspoken requirements, shown by the curve in the lower right corner of the figure.


Figure 2-23 The Kano Model

The customer may indeed be unaware of these requirements, or may assume that such requirements will be supplied automatically. Basic specifications often fail to take real-world manufacturing requirements into account; many are based merely on industry standards or past practice. These implied requirements are the most difficult to define, but they prove very costly if ignored, and they may be rediscovered only during an after-the-fact analysis of lessons learned.

Realistically, a customer does not buy a specification; customers buy a product or service to fulfill a need. Peter Drucker once said, "Customers don't buy products, they buy results." Just meeting a customer's needs is not enough; the organization must exceed the customer's needs. Questionnaires are designed to identify basic, expected, and exciting features according to Kano's model, and these are then translated into engineering requirements by using Quality Function Deployment (QFD).


Module II. Process Quality Improvement

Lecture – 4 How leadership influence process quality initiatives?

There is no universal definition of leadership, and indeed many books have been devoted to the topic. Researchers describe a leader as one who instills purpose, not one who controls by brute force. A leader strengthens and inspires followers to accomplish shared goals, and shapes, promotes, protects, and exemplifies the organization's values. Similarly, DaimlerChrysler's CEO, Bob Eaton, defined a leader as "someone who can take a group of people to a place they don't think they can go." As these definitions illustrate, leadership is difficult to define in anything other than lofty words. The Malcolm Baldrige National Quality Award has a more grounded definition of leadership in its core values. As stated in its core values and concepts, visionary leadership means:

"An organization's senior leaders should set directions and create a customer focus, clear

and visible values, and high expectations. Directions, values, and expectations should

balance needs of all stakeholders. Leaders should ensure creation of strategies, systems, and

methods for achieving excellence, stimulating innovation, and building

knowledge and capabilities. Values and strategies should help guide all activities and

decisions of organization. Senior leaders should inspire and motivate entire workforce and

should encourage all employees to contribute, develop and learn, be innovative, and creative.

Senior leaders should serve as role models through their ethical behavior and their personal

involvement in planning, communication, coaching, development of future leaders, review of

organizational performance, and employee recognition. As role models, they can reinforce values

and expectations while building leadership, commitment, and initiative throughout your

organization."

Although leadership is difficult to define, successful quality leaders tend to share certain characteristics. They:

give priority to external and internal customers and their needs;

empower, rather than control, subordinates;

emphasize improvement rather than maintenance;

emphasize prevention;

encourage collaboration rather than competition;

train and coach, rather than direct and supervise;

learn from problems;

continually try to improve communication;

demonstrate their commitment to quality;

choose suppliers on the basis of quality, not price;

establish organizational systems to support the quality effort;

encourage and recognize team effort.

In order to improve processes and systems, leadership requires an intuitive understanding of human nature: basic needs, wants, and abilities. To be effective, a leader understands that people paradoxically need security and independence at the same time. Leaders need to give their employees independence and yet provide a secure working environment, one that encourages and rewards success. A working environment must be provided that fosters employee creativity and risk-taking by not penalizing mistakes. This is a key part of process quality improvement.

A leader will focus on a few key values and objectives in the process. Focusing on a few values

or objectives gives the employees the ability to discern on a daily basis what is important and

what is not in the process. Employees, upon understanding the objectives, must be given

personal control over the process in order to make the task their own and, thereby, something to

which they can commit. A leader, by giving the employee a measure of control over an

important process, will tap into the employee's inner drive. Employees, led by the manager, can become excited participants in the organization.

Having a worthwhile cause such as total quality management is not always enough

to get employees to participate in process improvement. People follow a leader, not a

cause. If the leader is trusted and liked, then employees will participate in total quality

management causes.


Therefore, it is particularly important that a leader's character and competence, which are developed by good habits and ethics, be above reproach. Effective leadership for improvement begins on the inside and moves out.

Ethics

Ethics is not a precept that is mutually exclusive from quality. Indeed, quality and ethics have a common core premise, which is to do the right things right.

Ethics is a body of principles or standards of human conduct that governs the behavior of individuals and organizations. It is about knowing what the right thing is. Ethics can mean something different to different people, especially given an organization's international workforce and varying cultural norms. Because individuals have different concepts of what is right, leaders need to develop standards or a code of ethics. Quality is dependent on ethical behavior. Doing what is right in the first place is a proven way to reduce costs, improve process quality, and create higher customer satisfaction. Many companies also hire ethics consultants to help them achieve their improvement goals.

Core Values, Concepts and Framework

Unity of purpose is the key to a leadership system, and core values and concepts provide that unity of purpose. Core values and concepts enable a framework for leaders throughout the organization to make right decisions. They foster TQM behavior and define the culture. Each organization needs to develop its own values. Given below are a few core values and concepts from the Malcolm Baldrige National Quality Award framework. They may be used as a starting point by any organization for quality improvement initiatives.

Visionary leadership

An organization's senior leaders need to set directions and create a customer orientation,


clear and visible quality values, and high expectations. Values, directions, and expectations need

to address all stakeholders. The leaders need to ensure the creation of strategies, systems, and

methods for achieving process excellence. Strategies and values should help guide all activities

and decisions of the organization. The senior leaders must commit to the development of the

entire workforce and should encourage participation, learning, innovation, and creativity by all

employees. Through their personal roles in planning, communications, review of organizational

performance, and employee recognition, the senior leaders serve as role models, reinforcing the

values and expectations, and building leadership and initiative throughout the organization.

Customer-Driven Excellence

An organization's quality is judged by its customers. All product and service characteristics that contribute value to the customer and lead to customer satisfaction are the focus of an organization's process management system. Customer-driven excellence

has both current and future components: understanding today's customer desires and

marketplace offerings as well as future innovations. Value and satisfaction may be influenced by

many factors throughout the customer's overall purchase, ownership, and

service experiences. These factors include the organization's relationship with customers that

helps build trust, confidence, and loyalty. This concept of quality includes

not only the product and service characteristics that meet basic customer requirements, but it also

includes those features and characteristics that differentiate them

from competing offerings. Customer-driven quality is thus a strategic concept. It is directed

toward customer retention, market-share gain, and growth. It demands constant sensitivity to changing and emerging customer and market requirements and to the factors that drive

customer satisfaction and retention. It also demands awareness of developments in technology

and of competitors' offerings, and rapid and flexible responses to customer and market

requirements.

Agility

Success in global markets demands agility. Organizations face ever-shorter cycles for

introduction of new and improved products and services, as well as for faster


and more flexible response to customers. Major improvements in response time often require simplification of work units and processes and the ability to change over rapidly from one process to another. Cross-trained and empowered employees are vital assets in

such a demanding environment.

Managing for Innovation

Innovation means making meaningful change to improve an organization's products, services, and processes in order to create value for the organization's stakeholders. Innovation can lead an organization to new dimensions of performance. Innovation is no longer strictly the purview of research and development departments; it is important for all aspects of the business process. Organizations should be led and managed so that innovation becomes part of the organizational culture.

Management by Fact

Organizations depend on measurement and analysis of process performance. Such measurements

should derive from business needs and strategy, and should provide critical data and information

about key processes, outputs, and results. Performance measurement should include customer,

product, and service performance; comparisons of operational, market, and competitive

performance; and supplier, employee, and cost and financial performance.

Systems Perspective

The Baldrige Criteria provide a systems perspective for managing an organization to achieve performance excellence. The Core Values form the building blocks and the integrating mechanism for the system. However, successful management of overall performance requires organization-specific synthesis and alignment. Synthesis means looking at an organization as a whole and building upon key business requirements, including strategic objectives and action plans. Alignment means using the key linkages among the requirements given in the Baldrige Categories, including key measures/indicators. Alignment includes using measures/indicators to link key strategies with key processes and to align resources to improve overall performance and satisfy customers. Thus, a systems perspective means managing the whole organization, as well as its components, to achieve success.

More details on the Baldrige Categories, leadership, and quality leadership can be found in the books by Besterfield et al. (2004) and Evans (2005).


Module II. Process Quality Improvement

Lecture – 5 How are Lean and JIT aligned with the quality philosophy?

Just-in-time (JIT) is a philosophy of continual improvement. Lean production means supplying the customer with exactly what the customer wants, when the customer wants it, without waste, through continual improvement. Lean production is driven by the "pull" of the customer's order, and JIT is one of the key ingredients of lean production. When implemented as a comprehensive manufacturing strategy, JIT and lean production sustain competitive advantage and result in greater overall returns.

With JIT, components are "pulled" through the system to arrive where they are needed and

when they are needed. When units do not arrive just as needed, a "problem" is identified. This

makes JIT an excellent tool to help operations managers add value by driving out waste and

unwanted variability. Because there is no excess inventory or excess time in a JIT system, costs

associated with unneeded inventory are eliminated and throughput is improved. Consequently,

the benefits of JIT are particularly helpful in supporting strategies of rapid response at lower cost.

As the elimination of waste and variability is fundamental to both JIT and lean production, a brief explanation of both is provided below.

Waste: Waste is anything that does not add value to customers; in other words, customers are not willing to pay for it. Products being stored, inspected, or delayed, products waiting in queues, and defective products that do not add value are waste. Moreover, any activity that does not add value to a product from the customer's perspective is called waste. JIT provides faster delivery, reduced work-in-process, and speedier throughput. Additionally, because JIT reduces work-in-process, it provides little room for error, putting added emphasis on quality production. These waste reduction efforts improve productivity and processes.

Variability Reduction: To achieve just-in-time material movement, managers reduce variability caused by both internal and external factors in the process. Variability is any deviation from the standard process that delivers a perfect product on time, every time. With less inventory and less waste in the system, uncertainty is ultimately reduced. Most variability is caused by tolerating waste, by poor inventory management, or by an ineffective process. Variability occurs because:

(i) employees, machines, and suppliers produce units that do not conform to standards;

(ii) engineering drawings or specifications are inaccurate (although this is a rare event);

(iii) customers' exact demands are unknown, or the design is improper.

Variability can often go unseen when inventory exists. The JIT philosophy is aligned with continual improvement through the reduction of such variability. The removal of variability allows us to move materials just-in-time for use. JIT implementation can reduce throughput time in a supply chain. Quality improves as uncertainty decreases and variability reduces.


Module II. Process Quality Improvement

Lecture – 6 How does benchmarking help to improve process quality?

Benchmarking is a systematic method by which organizations can measure themselves against the best industry practices. It promotes superior performance by providing an organized framework through which organizations learn how the "best in class" do things, understand how these best practices differ from their own, and implement change to close the gap.

Benchmarking Defined

Benchmarking is the systematic search for best practices, innovative ideas, and highly effective operating practices. Benchmarking considers the experience of others and uses it. Indeed, it is a common-sense proposition to learn from others what they do right and then imitate it to avoid reinventing the wheel. Benchmarking is not new and indeed has been around for a long time. In fact, in the 1800s, Francis Lowell, a New England colonist, studied British textile mills and imported many ideas, along with improvements he made, for the burgeoning American textile mills. Benchmarking is used extensively by both manufacturing and service organizations, including Xerox, AT&T, Motorola, Ford, and Toyota. Benchmarking is a common element of quality standards, such as the Chrysler, Ford, and General Motors Quality System Requirements.

Figure 2-24 Benchmarking Framework


As shown in Figure 2-24, benchmarking measures performance against that of best-in-class organizations (which may be entirely different organizations with different products or services), determines how the best in class achieve those performance levels, and uses the information as the basis for adaptive creativity and breakthrough performance.

Implicit in the definition of benchmarking are two key elements. First, measuring performance requires some unit of measurement. These units are called metrics and are usually expressed numerically. The numbers achieved by the best in class are taken as the benchmark or target. An organization seeking improvement then plots its own performance against the target. Second,

benchmarking requires that managers understand why their performance differs. Benchmarkers

must develop a thorough and in-depth knowledge of both their own processes and the processes

of the best-in-class organization. An understanding of the differences allows managers to

organize their improvement efforts to meet the desired goal. Benchmarking is all about meeting

goals and objectives by improving processes.

Reasons to Benchmark

Benchmarking is a tool to achieve business and competitive objectives. It is powerful and

extremely effective when used for the right reasons and aligned with organizational strategy. It is

not a panacea that can replace all other quality efforts or management processes. Organizations

must still decide which markets to serve and determine the strengths that will enable them to gain

competitive advantage. Benchmarking is one tool to help organizations develop those strengths

and reduce their weaknesses.

By definition, benchmarking requires an external orientation, which is critical in a competitive

world where the competitor can easily be on the other side of the globe. An external outlook

greatly reduces the chance of being caught unaware by the competition. Benchmarking can

notify the organization if it has fallen behind the competition or failed to take advantage of

important operating improvements developed elsewhere. In short, benchmarking can inspire

managers (and organizations) to compete. The primary weakness of benchmarking, however, is

the fact that best-in-class performance is a moving target.


Pitfalls and Criticisms of Benchmarking

The basic idea of benchmarking can be summed up quite simply: find someone who executes a process better than you do and imitate what he or she does. The most persistent criticism of benchmarking comes from the idea of copying others. How can an organization be truly superior if it does not innovate to get ahead of competitors? It is a fair question, but one can also ask the reverse: how can an organization even survive if it loses track of its external environment?

Benchmarking is not a strategy, nor is it intended to be a business philosophy. It is an

improvement tool and must be used properly. Benchmarking isn't very helpful if it is used for

processes that don't offer much opportunity for improvement. It breaks down if process owners

and managers feel threatened or do not accept and act on findings. Over time, things change and

what was state-of-the-art yesterday may not be today. Some processes may have to be

benchmarked repeatedly.

Benchmarking is also not a substitute for innovation; however, it is a source of ideas from outside the organization. Benchmarking forces an organization to set goals and objectives based on external reality. Consumers care about quality, cost, and delivery, not about the productivity of the organization.


Module II. Process Quality Improvement

Lecture – 7 In what way does failure mode and effect analysis (FMEA) help process improvement initiatives?

Like Design FMEA, discussed elaborately in Module 3, a Process FMEA can be performed to remove any possibility of process failures. All failure modes; causes of failure; and severity, occurrence, and detection ratings are to be determined, and reducing the risk priority number (RPN) by taking corrective action is the standard procedure for preparing and following a Process FMEA. More discussion on Process FMEA is given in the QS 9000 document (http://en.wikipedia.org/wiki/QS9000; http://asq.org/learn-about-quality/process-analysis-tools/overview/fmea.html; http://www.qualitytrainingportal.com/resources/fmea/fmea_process.htm). Readers may refer to Module 3 to understand the basic steps of FMEA and its rating system.
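As a minimal sketch of this procedure, the following ranks a few hypothetical failure modes by RPN (the failure modes and ratings below are invented for illustration; severity, occurrence, and detection are assumed to be on the conventional 1-10 scales):

# Minimal Process FMEA risk-ranking sketch (hypothetical data).
# RPN = severity x occurrence x detection, each rated 1-10.
failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("wrong component inserted", 8, 4, 3),
    ("solder joint cracks",      7, 3, 6),
    ("label misprinted",         3, 5, 2),
]

# Rank by RPN, highest first, to prioritize corrective action.
for mode, s, o, d in sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True):
    print(f"{mode:26s} RPN = {s * o * d}")

The failure mode with the highest RPN would be the first candidate for corrective action; after the action, the ratings are reassessed and the RPN recomputed.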


Module II. Process Quality Improvement

Lecture – 8 How is the service quality concept integrated with process quality improvement? How does it differ from the concept of manufacturing process quality?

Service means many different things in different contexts, as compared to product quality (Garvin, 1984). For some it is synonymous with customer care; for others it is the equivalent of the logistics function, or internal services such as accounting or personnel; for still others it means the 10,000-mile check-up for their car.

Despite more than 25 years of study, scholars in the field of services management do not agree

on ‘what a service is’. From the customers’ perspective, service is a combination of the customer experience and their perception of the outcome of the service. An experience at a park, for example, includes the experience of rides and restaurants, emotions of enjoyment, and the customer's view of value for money at the end of the day. From a manufacturing operations perspective, Figure 2-25 shows the transformation process in a service operation.

Figure 2-25 Service Operation: Service Transformation Process

Figure 2-26 Defining Service from the Customer’s Perspective

It is important to note that customers also have to make an input to the service. These

customer inputs include their time and effort plus the financial cost (i.e. the price they pay

for the service) (refer to Figure 2-26).

[Figure 2-25 depicts the inputs (customer and staff, materials and facilities, equipment and technology) entering the service process and emerging as goods and services. Figure 2-26 depicts the customer inputs (customer time, customer effort, and the financial cost of the service) yielding an experience plus the service product: value, emotions, judgments, and intentions.]


Service Quality

The term service quality is often used to mean different things. Some operations managers use the term to mean how the customer is treated. This is perhaps more accurately called quality of service, as opposed to service quality, which can mean both outcome and experience. Definitions of service quality include:

Satisfaction

Sometimes service quality is used to mean the same as satisfaction, i.e. perceived (experienced) service quality.

An impression of the organization and its services

Service quality is more often used as a more enduring construct, whereas satisfaction is situation- and experience-specific. Satisfaction has to be experienced (refer to Johnston and Clark, 2008), whereas customers may have views about an organization's service quality without ever having experienced the service. Service quality can also be expressed as the consumer's overall impression of the relative inferiority/superiority of an organization's services. Recent empirical work also suggests that there is an interactive relationship between satisfaction and service quality, i.e. each can have a moderating effect on the other and on post-purchase intentions (Johnston and Clark, 2008).

Quality delivered

When we talk about service quality from an operations perspective, we usually mean the quality of the service (possibly delivered in multiple stages), i.e. does it consistently meet the specification for that service? This, of course, may be different from how a customer sees the service (their perceived service quality), and thus there may be a mismatch between a customer's expectations of a service and their perception of its delivery. This mismatch could be the result of either a mismatch between expectation and delivery and/or a mismatch between delivery and perceptions, which is a simplified version of the gap model developed by Parasuraman et al. (1985).

Customers’ Expectations

Organizations need to understand the expectations of customers and, if appropriate, manage those expectations (refer to Figure 2-27). Indeed, it may be appropriate to try to shape customers' expectations in order to keep them at the right level, one that can be met or just exceeded by service delivery. This is a key challenge for service operations managers.

Figure 2-27 Expectation Scale

Customer expectations are influenced by various factors, as illustrated in Figure 2-28.

Figure 2-28 Factors influencing expectation

Customers' expectations and service quality factors

Price often has a large influence on expectations: the higher the price, the higher the customer expectations. The alternative services available also help define and set expectations. Marketing can have a considerable influence on expectations; marketing, branding, and advertising campaigns help set expectations.

Word-of-mouth marketing can have a profound effect

on customer expectations. Indeed, in some situations, word-of-mouth may

have a stronger influence than organizational marketing.

Customers' mood and attitude can affect their expectations. Someone in a bad mood or with a poor attitude toward an organization may have heightened expectations; someone less concerned and more tolerant may have a wider zone of tolerance and thus a wider range of expectations.

As observed, expectations are dynamic. Customers are continually experiencing many service

situations and their expectations are under continual review and change.

Service quality factors are those attributes of service about which customers may have

expectations and which need to be delivered at some specified level. Several sets of factors have

been identified (Parasuraman et al. 1985, 1988; Zeithaml et al., 1991) while explaining the

service quality gap model. There are 18 independent quality factors (Johnston and Clark, 2008) which try to capture the totality of service quality. These factors include access, aesthetics,

attentiveness/helpfulness, availability, care, cleanliness/tidiness, comfort, commitment,

communication, competence, courtesy, flexibility, friendliness, functionality, integrity,

reliability, responsiveness and security.


Module II. Process Quality Improvement

Lecture – 9 How is the six sigma philosophy aligned with process quality improvement?

High-technology products with many complex components typically have many opportunities for failures or defects to occur. Motorola developed the six-sigma program in the late 1980s as a response to the demand for these products. The focus of six sigma is reducing variability in key product quality characteristics, the so-called critical-to-quality (CTQ) characteristics, to a level at which defects are extremely unlikely.

Figure 2-29 shows a normal probability distribution as a model for a quality characteristic with

the specification limits at six standard deviations on either side of the mean.

Figure 2-29 Normal probability distribution

Now it turns out that when the specification limits are at the three-standard-deviation level, the probability of producing a product within these specifications is 0.9973, which corresponds to 2700 parts per million (ppm) defective. This is referred to as three-sigma quality performance. Suppose we have a product that consists of an assembly of 100 components or parts, and all 100 of these parts must be non-defective for the product to function satisfactorily. The probability that a unit of product has no defects is


0.9973 × 0.9973 × ... × 0.9973 = (0.9973)^100 = 0.7631

That is, about 23.7% of the products produced under three-sigma quality will be defective. This is not an acceptable situation, because many high-technology products are made up of thousands of components. An automobile has about 200,000 components and an airplane has several million.
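The arithmetic above is easy to reproduce. A small sketch follows (the 0.9973 per-component yield is the three-sigma figure from the text; the component counts beyond 100 are added only for illustration):

# Product yield when all n components must be non-defective.
p_component = 0.9973  # per-component conformance at three-sigma quality

for n in (100, 1000, 10000):
    p_product = p_component ** n
    print(f"{n:6d} components: yield = {p_product:.4f}")

# 100 components -> 0.7631, i.e., about 23.7% of units defective.

The yield collapses rapidly as the component count grows, which is why three-sigma quality is untenable for complex products.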

The Motorola six-sigma concept is to reduce the variability in the process so that the specification limits are six standard deviations from the mean. Then, as given in Figure 2-29, there will be only about 2 parts per billion defective. For six-sigma quality, the probability that any specific unit of the hypothetical 100-part product is free of defects is 0.9999998, corresponding to 0.2 defective parts per million.

When the six-sigma concept was initially developed, an assumption was made that when the

process reached the six-sigma quality level, the process mean was still subject to disturbances

that could cause it to shift by as much as 1.5 standard deviations off target. Under this scenario

(shown in Figure 2-30), a six-sigma process would produce about 3.4 ppm defects.

Figure 2-30 Shift in Mean

Table 2-5 PPM level vs sigma rating


Specification limit ±1 σ ±2 σ ±3 σ ±4 σ ±5 σ ±6 σ

Percent inside specs 30.23 69.13 93.32 99.379 99.9767 99.99966

ppm defective 697,700 308,700 66,810 6,210 233 3.4

Considering the shift in mean as a natural phenomenon, the percent conforming and parts per million (ppm) defective are given in Table 2-5.
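These table entries follow directly from the normal distribution. A small sketch that recomputes them, assuming (as in Figure 2-30) a 1.5 standard-deviation shift of the mean off target:

from scipy.stats import norm

shift = 1.5  # assumed off-target shift of the process mean, in sigma units
for k in range(1, 7):  # specification limits at +/- k sigma
    inside = norm.cdf(k - shift) - norm.cdf(-k - shift)
    print(f"+/-{k} sigma: {100 * inside:.5f}% inside specs, "
          f"{1e6 * (1 - inside):,.1f} ppm defective")

Running this reproduces the table, including the well-known 3.4 ppm figure at the ±6 σ level.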

Six sigma also considers the important concept of opportunity (defects per million opportunities) in defining the PPM level. Motorola

established six-sigma as both an objective for the corporation and as a focal point for process and

product quality improvement efforts. In recent years, six-sigma has spread beyond Motorola and

has come to encompass much more. It has become a program for improving corporate business

performance by both improving quality and paying attention to reducing costs. Companies

involved in a six-sigma effort utilize teams to work on projects that have both quality and

significant financial impact. The effort is better focused than in earlier TQM programs, and has

been more successful in obtaining management commitment. However, remember Deming's

point 10, which essentially says to eliminate slogans and programs to improve quality. There are

many programs, including zero defects, value engineering, quality is free, TQM, and so forth, which have failed due to improper implementation. A major component of successful quality improvement is driving the use of the proper statistical and engineering tools into the right places in the organization. A DMAIC (Define, Measure, Analyze, Improve, and Control) approach is used to implement the six sigma philosophy. However, it is to be remembered that the statistical approach (say, SPC and DOE) is the key theme for variation reduction in the Six Sigma philosophy. Some additional information on Six Sigma and Product Quality Improvement is provided in Module 3.


Module II. Process Quality Improvement

Lecture – 10 How do ISO 9001 and other standards used in a Quality Management System (QMS) influence process quality?

ISO is an international organisation for standardisation, formed for the development and issuing of international standards to be used across the world. Since its inception, it has published more than 19,000 standards. Standardisation helps in the optimisation of operations through the proper utilisation of resources. ISO's predecessor, the International Federation of the National Standardising Associations (ISA), was dissolved during World War II. The name ISO is derived from the Greek word “isos”, which means “equal”. The members of ISO are the recognised standards authorities, each representing its respective nation. For example, the American National Standards Institute (ANSI) is the representative of the United States in ISO, and the Bureau of Indian Standards is the representative of India. The structure of ISO comprises technical committees, sub-committees, and working groups.

The ISO 9000 Series of Standards is generic in scope. By design, the series can be tailored to fit any organization's needs, whether it is large or small, a manufacturer or a service organization. The ISO 9000 series was developed to address quality aspects and embodies the eight quality management principles. It can be applied to construction, engineering, health care, legal, and other professional services, as well as to the manufacturing of anything from nuts and bolts to spacecraft. Its purpose is to unify the quality terms and definitions used by industrialized nations and to use those terms to demonstrate a supplier's capability of controlling its processes. In very simplified terms, the standards require an organization to say what it is doing to ensure quality, then do what it says, and, finally, document or prove that it has done what it said. The main reason behind establishing ISO standards is to ensure the required safety, quality, and reliability of products and services. This can raise the level of productivity and reduce the chance of error.

The three initial standards of the series are:

ISO 9000:2000, Quality Management Systems (QMS) - Fundamentals and Vocabulary, discusses the fundamental concepts related to the QMS and provides the terminology used in the other two standards.

ISO 9001:2000, Quality Management Systems (QMS) - Requirements, is the standard used for registration, demonstrating conformity of the QMS to customer, regulatory, and the organization's own requirements. ISO 9001:2008 was further developed with the aim of establishing the requirements of quality management systems. The certification for ISO 9001 needs to be renewed by the organization after a particular period suggested by the certification body; generally, this period is three years.

ISO 9004:2000, Quality Management Systems (QMS) - Guidelines for Performance Improvement, provides guidelines that an organization can use to establish a QMS focused on improving performance. ISO 9004:2009, released in November 2009, is the latest revision, replacing ISO 9004:2000.

The standard has eight clauses: Scope, Normative References, Definitions, Quality

Management Systems, Management Responsibility, Resource Management, Product

and/or Service Realization, and Measurement, Analysis, and Improvement. The first

three clauses are for information while the last five are requirements that an organization must

meet.

AS9100

This aerospace industry quality system was officially released by the Society of Automotive Engineers in May 1997. Its development and release represent the first attempt to unify the requirements of NASA, DOD, and FAA while satisfying the aerospace industry's business needs. In March 2001, the International Aerospace Quality Group (IAQG) aligned AS9100 with ISO 9001:2000.

QS 9000

The famous “Big Three” of the US automobile sector, namely Chrysler, Ford, and General Motors, initially had their own supplier development models and associated quality assurance systems. However, in the late 1980s, a need was felt to develop a harmonized common model for the suppliers to these Big Three, as suppliers were faced with the problem of complying with different quality assurance models for supplies made to different buyers. Accordingly, the joint Automotive Industry Action Group (AIAG) was set up to develop a harmonized supplier development and quality assurance model, primarily meant for suppliers to the above-mentioned automobile manufacturers and subsequently also for other automobile-related industries throughout the world.

The basis used by the AIAG was the ISO-9000 series of standards in addition to the existing

individual quality assurance standards of Chrysler, Ford, and General Motors. These are:

Chrysler : Supplier Quality Assurance Manual

Ford : Q-101 Quality System Standard

General Motors : North American Operations Targets for Excellence

QS-9000 (http://en.wikipedia.org/wiki/QS9000) defines the fundamental quality system

expectations of Chrysler, Ford, General Motors, truck manufacturers, and other subscribing

companies for internal and external suppliers of production and service parts and materials.

These organizations are committed to working with suppliers to ensure customer satisfaction

beginning with conformance to quality requirements, and continuing with reduction of variation

and waste to benefit the final customer, the supply base, and themselves.

In addition, ISO 90003:2004 is another standard, developed with the aim of improving the quality of software-related products in terms of supply, development, maintenance, and support services, whereas ISO 13485 states all the specifications required for a comprehensive quality management system for the design and manufacture of medical devices. As quality practices also influence society at large, ISO 14000 was created with the aim of controlling the adverse effects on the environment of the processes followed by organizations. The ISO 14001 standard represents the set of requirements used for the successful implementation of an Environmental Management System.

All the quality management standards discussed above have process/system/environment improvement as their central theme and provide general guidelines to achieve that goal.


Module II. Process Quality Improvement

Lecture – 11 What is the role of audit in the implementation of a Quality Management System (QMS)?

A QMS encompasses a whole set of activities, which need to be coordinated and controlled for

producing the desired quality in an organization. The objective of QMS is to improve the

performance of an organization and sustained improvements. In this context, internal audits

allow for checking the effectiveness of the installed quality systems from time to time. Internal

audits help in ensuring that the QMS conforms to the organizational quality policies as well as

any quality standard that it follows. Management reviews are also adopted by organizations to check the current capability of the QMS being implemented. These reviews may be conducted after four to six months of QMS implementation. The reviews also help to keep track of the adequacy

of the QMS. Before applying for ISO 9000 or any other international standard certification, a

pre-assessment audit is also done at the organization level by an external agency. A pre-assessment audit helps the organisation understand where it stands with respect to quality requirements, and provides an opportunity to correct issues which may come in the way of a successful international standard certification application. Any kind of quality audit helps to maintain consistency between ‘what we record’ and ‘what we actually do’.


Module II. Process Quality Improvement

Lecture – 12 How do Quality Awards, Quality Councils, Quality Circles, and Quality Improvement Teams help improve process quality?

Quality awards are prizes awarded for some aspect of quality performance that has been demonstrated by an organization. The Deming Prize was instituted by the Union of Japanese Scientists and Engineers (JUSE) in 1951 to honour the contributions of W. E. Deming towards quality control in Japan. This award is given to organizations that have adopted and practiced Total Quality Management principles successfully. Similarly, the Malcolm Baldrige National Quality Award (MBNQA) was instituted in the USA in the year 1987. In India, the Bureau of Indian Standards constituted the Rajiv Gandhi National Quality Award in 1991 to promote excellence in Indian manufacturing and service organisations.

There are seven parameters for evaluating whether an organisation qualifies for a specific award, and marks are allocated to each parameter and its sub-parameters. All the quality awards place emphasis on strategic planning and on the links between strategic and quality planning. The overall emphasis is on improving processes and systems for better quality output.

In the 1960s, quality circles were formed in Japanese industries with the aim of improving the levels of quality. This concept is still followed in many manufacturing and service industries for quality and process improvement. A quality circle is basically a participatory management philosophy, in which a group of employees is formed to identify, analyse, and solve quality problems. The main objectives of a quality circle are to identify quality issues, find out the root causes, and solve the issues to improve process quality.



Module III Product Quality Improvement

Lecture – 1 How does QFD help in product quality improvement?

Quality Function Deployment (QFD), or the house of quality, is the foundation for linking the voice of the customer with the technical design requirements of a product. In other words, abstract specifications required by the targeted customers are translated into specific product technical requirements. Say, in summer, a customer needs a room to be cool and comfortable; however, how cool it must be to give comfort is not specified. Take another situation, in which a customer wants hot coffee. Hot coffee is the ‘voice of the customer (VOC)’ [or ‘critical-to-quality (CTQ) characteristic’] that the customer demands. He/she may not specify the temperature, but the shopkeeper needs to identify the best possible temperature setting for the coffee maker machine.

The best setting will also differ according to weather conditions/seasons. In order to translate a VOC (say, a comfortable temperature range for an AC), the AC machine designer must first experiment and specify the feasible range of temperature settings (say, 18°C to 27°C) for varied customers. Providing a varied temperature setting leads to flexibility in the design and helps different customers set different comfortable temperatures at the workplace/home. There can be more than

one VOC, and VOCs can also interact. So, as the understanding of the customer's priorities/needs (VOC) for a product becomes clearer and is subsequently frozen, the designer attempts to translate those into product technical requirements, so as to deliver the best tradeoff solution for interacting VOCs. The next step is to build a product prototype and check the real-life performance of the machine. This is a continual design improvement activity, and finalizing a design may require 30 to 40 prototype experiments. Subsequently, the product design is approved for pilot/full production. QFD is a structured framework to translate the VOC into the technical specifications of a product. It is not an optimization tool and does not provide any tradeoff solution; it only guides the engineers towards developing a robust product design from the customer's perspective.

The structure of QFD can be thought of as a house (the so-called ‘House of Quality’), as shown in Figure 3-1.


Figure 3-1: House of Quality

The parts of the house of quality are described as follows:

The outside walls of the house are the customer requirements and their priorities. On the left side is a listing of the VOC. On the right side are the prioritized customer requirements, derived from customer surveys. The ceiling of the house contains the technical descriptors or requirements with the experts' priorities. The central or interior walls of the house are the relationships between customer requirements and technical requirements. Customer voices (customer requirements) are translated into engineering requirements (technical descriptors).

The roof of the house shows the interrelationships between independent technical requirements. Here the trade-offs between similar and/or conflicting technical requirements are identified. The aim of the house is to determine the prioritized technical requirements. Technical benchmarking, reverse engineering, tradeoff analysis, and target-value comparison are mostly used to determine the technical bounds.

This is the basic structure of the house of quality. However, based on this format, varied QFD matrices have been proposed.

Building a House of Quality

Quality function deployment starts with a list of goals/objectives. This list is often referred to as the

WHATs that a customer needs or expects in a particular product. This list of primary customer

requirements is usually vague and very general in nature. Further definition is accomplished by

defining a new, more detailed list of secondary customer requirements needed to support the

primary customer requirements. In other words, a primary customer requirement may encompass

numerous secondary customer requirements.

Let us consider the development process of designing a handlebar stem for a bicycle.

Let us assume that there are two primary customer requirements, viz. aesthetics and

performance. The secondary customer requirements under aesthetics are affordable cost,

aerodynamic look, proper finish, and corrosion resistance. The secondary customer requirements

under performance are light weight, strength, and durability. This is illustrated in the QFD or

House of Quality diagram (Figure 3-2).


Figure 3-2 House of Quality of a handlebar stem in a bicycle


As the customer needs and expectations are expressed in terms of customer requirements, the

QFD team needs to come up with engineering characteristics (HOWs)

that will affect one or more of the customer requirements. Each engineering characteristic must

directly affect a customer perception (VOC) and be expressed in measurable terms.

Implementation of the customer requirements in design is difficult until they are translated into

counterpart technical characteristics. Counterpart technical characteristics are an expression of

the voice of the customer in technical language and specifications. For example, a customer

requirement for an automobile might be a smooth ride. This is rather an abstract statement,

which is important from the point of view of selling an automobile. Technical characteristics for

a smooth ride can be appropriate dampening, anti-roll, and stability requirements. These are the

primary technical descriptors or characteristics. Engineering knowledge and brainstorming among the engineering staff are the suggested means for determining technical characteristics.

Figure 3-3 shows the different technical requirements which can address all VOC for the bike

stem design.

Figure 3-3 Interrelationship between VOC and Technical Requirements

The next step in building a house of quality is to compare the VOC with technical characteristics


and determine their interrelationships. In this context, engineering knowledge about the product

and historical evidence/data can provide useful information. Common practice is to use symbols

to represent the nature of relationship between customer requirements and technical descriptors.

Symbols used are:

I. A solid circle represents a strong relationship (scored as +9).

II. A single circle represents a medium relationship (scored as +3).

III. A triangle represents a weak relationship (scored as +1).

IV. The box is left blank if there is no relationship between VOC and technical

characteristics.

Figure 3-4 provides the interrelationship matrix with type of relationships. Any cell that is

empty implies no or insignificant relationship.

Figure 3-4 Complete Interrelationship between VOC and Technical Requirements

After drafting the relationship matrix, it is evaluated for any empty row or column. An empty

row indicates that a customer voice is not being addressed by any technical descriptors. Thus, the

customer expectation is not being met. Any blank column indicates that the technical

requirement is unnecessary, as it does not address any VOC.
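This check is mechanical once the relationship matrix is scored. A minimal sketch with a small hypothetical 9/3/1-scored matrix (blank cells entered as 0; the values are invented for illustration):

import numpy as np

# Rows = customer requirements (VOC), columns = technical descriptors.
R = np.array([
    [9, 3, 0],
    [0, 1, 0],
    [3, 9, 0],
    [0, 0, 0],
])

# Empty row: a customer voice addressed by no technical descriptor.
print("unmet VOC rows:", np.where(~R.any(axis=1))[0])         # -> [3]
# Empty column: a technical requirement that addresses no VOC.
print("unneeded tech columns:", np.where(~R.any(axis=0))[0])  # -> [2]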


The roof of the house of quality, expressed as correlation matrix, is used to identify any

interrelationships between the technical descriptors (Figure 3-5). Symbols are used to describe

the strength of the interrelationships. Symbols generally preferred are:

I. A ‘solid circle’ represents a strong positive relationship.

II. A ‘circle’ represents a positive relationship.

III. An ‘X’ represents a negative relationship.

IV. An ‘asterisk’ represents a strong negative relationship.

Figure 3-5 Correlation Matrix and Tradeoff between Technical Requirements


The symbols also describe the direction of the correlation. In other words, a strong positive

interrelationship means nearly perfect positive correlation, and a strong negative interrelationship indicates nearly perfect negative correlation. This type of representation allows the user to identify which

technical characteristics support one another and which are conflicting. Conflicting technical

descriptors are extremely important because they are frequently the result of conflicting customer

requirements and, consequently, represent points at which tradeoffs must be made. Tradeoffs that

are not identified and resolved while defining specifications will often lead to unfulfilled requirements, unnecessary engineering changes, increased cost, and poor quality from the

standpoint of customers. Some of the tradeoffs may require high-level managerial interventions,

because they cross functional boundaries.

An example of tradeoffs in the design of a car involves the customer requirements of high fuel economy and safety. These two CTQs and their technical descriptors are conflicting: the addition of stronger bumpers, air bags, and antilock brakes will ultimately reduce the fuel efficiency of the car.

The customer competitive assessment (Figure 3-6) is a pair of tables (or graphs) that depict how competitive products compare with the organization's current product on specific VOCs. The customer competitive assessment is the block of columns corresponding to each customer requirement in the house of quality, on the right side of the relationship matrix. The numbers 1 through 5 are listed in the competitive evaluation column to indicate a rating of 1 for worst and 5 for best. The customer competitive assessment is a good way to determine whether the customer voice has been met (as compared with the best competitor) and to identify areas of improvement for future designs.


Figure 3-6 Competitive Assessment of VOC

The technical competitive assessment makes up a block of rows corresponding to each technical

descriptor in the house of quality beneath the relationship matrix. After respective technical

factors have been established, the products are evaluated for each technical factor that addresses

VOC.

Similar to the customer competitive assessment, the data are recorded on a scale of 1 through 5, 1 for worst and 5 for best. The technical competitive assessment is often useful in uncovering gaps in engineering judgment.


Importance ratings represent the relative importance of the customer requirements in terms of one another.

The target-value column can be on the same scale as the customer competitive assessment (1 for worst, 5 for best). This column is where the QFD team decides whether it wants to keep the product unchanged, improve the product, or make the product better than the competitor's.

The prioritized technical descriptors make up a block of rows corresponding to the technical

descriptor in the house of quality below the technical competitive assessment as shown in Figure

3-7. These prioritized technical descriptors contain target values and absolute weights.


Figure 3-7 Absolute Weights of Technical Requirements

The last row of the prioritized technical descriptors contains the absolute weights. A popular and easy method for determining the weights is to assign numerical values to the symbols in the relationship matrix. The absolute weight for the jth technical descriptor is given as

a_j = Σ (i = 1 to n) R_ij c_i


where

a_j = absolute weight for the jth technical descriptor (j = 1, ..., m)

R_ij = weight assigned to the relationship matrix cell (i = 1, ..., n; j = 1, ..., m)

c_i = importance to the customer of the ith customer requirement (i = 1, ..., n)

m = number of technical descriptors

n = number of customer requirements

The absolute weight for each technical descriptor is determined by taking the dot product of its column in the relationship matrix and the column of importance to the customer. For instance, for aluminum (see Figure 3-7) the absolute weight is

(9×8 + 1×5 + 9×5 + 9×2 + 9×7 + 3×5 + 3×3) × 1 = 227

Greater values of the absolute weight indicate higher importance of the technical descriptor in addressing the VOC. These weights can be organized into a Pareto diagram to show which technical characteristics are most important in meeting customer requirements.
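A small sketch of this dot product, using the illustrative ratings read from Figure 3-7 for the aluminum column (treat the numbers as assumed inputs):

import numpy as np

# 9/3/1 relationship scores between the seven customer requirements
# and the 'aluminum' technical descriptor (one column of R).
R_aluminum = np.array([9, 1, 9, 9, 9, 3, 3])

# Importance-to-customer ratings c_i for the same seven requirements.
c = np.array([8, 5, 5, 2, 7, 5, 3])

# Absolute weight a_j = sum over i of R_ij * c_i, i.e., a dot product.
print(R_aluminum @ c)  # -> 227, matching the worked example

Repeating this for every technical descriptor (every column of R) gives the full row of absolute weights, which can then be sorted for the Pareto diagram.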

Toyota, the Japanese car company, incurred huge warranty expenses on a corrosion problem during the 1960s and 1970s. The Toyota rust QFD study resulted in a virtual elimination of corrosion warranty expenses. The customer requirement on durability was also achieved, with no visible rust in the following three years. It was determined that this could be obtained by including a minimum paint film build and maximum surface treatment. The key process operation that provides these part-quality characteristics consists of a three-coat process.


Module III Product Quality Improvement

Lecture – 2 What are component and system reliability, and how can they be improved?

Reliability is a measure of the quality of a product over the long run. The concept of reliability involves an extended time period over which the expected operation of the product is considered; we expect the product to function according to certain expectations over a stipulated period of time. With the customer and warranty costs in mind, we must know the chances of successful operation of the product for at least a certain stipulated period of time. Such information helps the manufacturer select the parameters of a warranty policy.

Technically, reliability is the probability of a product performing its intended function for a

stated period of time under certain specified conditions. Four aspects of reliability are apparent from this definition. First, reliability is a success-related probability concept; the numerical value of this probability is always between 0 and 1. Second, the functional performance of the product has to be measured under certain stipulated conditions. Product design is expected to

ensure development of a product that meets or exceeds the specified requirements under

specified operating conditions. For example, if the breaking strength of a nylon cord is expected

to be 1000 kg, then in the predefined operational conditions, the cord must be able to bear

weights of 1000 kg or more. Third, reliability implies successful operation over a certain period

of time (t). Although no product is expected to last forever, the time dimension ensures

satisfactory performance over at least a minimal stated period (say, 100 hours). In the context of

these three aspects, the reliability of the nylon cord might be described as having a probability of

successful performance of 0.92 in bearing loads of 1000 kg for 1 year under dry conditions.

It is observed that most manufactured products go through three distinct phases (see Figure 3-8) from product inception to wear-out.


Figure 3-8 Bathtub Curve

The life-cycle curve of Figure 3-8 shows the variation in the failure rate as a function of time across the different phases. Conventionally, the failure rate (λ) is plotted as a function of time. This curve is often referred to as the bathtub curve; it consists of the debugging (infant-mortality) phase, the chance-failure phase (useful-life phase), and the wear-out phase.

The debugging phase, also known as the infant-mortality phase, exhibits a drop

in the failure rate as initial problems identified during prototype testing are removed.

The chance-failure phase, between times t1 and t2, is then encountered; failures occur

randomly and independently. This phase, in which the failure rate is constant, typically

represents the useful life of the product delivered to end customer. In the wear-out phase, an

increase in the failure rate is expected due to wear and tear of the product. Here, after the end of

their useful life, parts age and wear out.

For the random chance-failure phase, which represents the useful life of the

product or component, the failure rate is assumed to be constant. As a result, the exponential

distribution is selected to describe the time-to-failure of the product for this phase.


The exponential distribution has the memoryless property, and its probability density function is given by

f(t) = λe^(−λt),  t ≥ 0

The mean-time-to-failure (MTTF) for the exponential distribution can be expressed as

MTTF=1/λ

The reliability at time t, say R(t), is the probability of the product surviving beyond time t. It can be expressed as

R(t) = 1 − F(t) = 1 − ∫₀ᵗ λe^(−λu) du = e^(−λt)

Here, F(t) represents the cumulative distribution function at time t. Reliability decreases exponentially with time (Figure 3-9), and the failure-rate function, say r(t), is given by the ratio of the time-to-failure probability density function to the reliability function:

r(t) = f(t) / R(t)

Figure 3-9 Reliability v/s Time


Thus, assuming an exponential time-to-failure distribution, r(t) reduces to a constant failure rate, as shown below:

r(t) = λe^(−λt) / e^(−λt) = λ

Let us consider a resistor component whose time-to-failure follows an exponential distribution with a failure rate of 8% per 1000 hr. We wish to calculate the reliability of the resistor at 5000 hr, and also the mean time to failure. Here the constant failure rate λ is obtained as

λ = 0.08/1000 hr = 0.00008/hr

Thus, the reliability for 5000 hr of survival is

R(t) = e^(−λt) = e^(−(0.00008)(5000)) = e^(−0.4) = 0.6703

Thus there is about a 67% chance of survival (success) of the resistor under the stipulated conditions and stipulated time (5000 hr). The mean (average) time to failure (assuming the resistor cannot be repaired) will be

MTTF = 1/λ = 1/0.00008 = 12,500 hr
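A short sketch reproducing the resistor calculation (the function name is illustrative):

```python
import math

lam = 0.08 / 1000   # failure rate: 8% per 1000 hr = 0.00008 per hr

def reliability(t, lam):
    # Exponential survival probability R(t) = exp(-lambda * t)
    return math.exp(-lam * t)

print(round(reliability(5000, lam), 4))  # 0.6703, i.e. ~67% chance of survival
print(1 / lam)                           # MTTF = 12500.0 hr
```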

System Reliability

Let us consider a system with three components (say three resistors) in series, as shown in Figure 3-10.

Figure 3-10 Series System


Without loss of generality, if the system components can be assumed to have exponential time-to-failure distributions, each with a constant failure rate, we can easily compute the reliability of an n-component series system. Suppose the system has n components in series, each with exponentially distributed time-to-failure with failure rates λ₁, λ₂, ..., λₙ. The system reliability is calculated as the product of the component reliabilities:

R_S(t) = e^(−λ₁t) × e^(−λ₂t) × ⋯ × e^(−λₙt) = exp(−t Σᵢ₌₁ⁿ λᵢ)

This implies that the time-to-failure of the system is exponentially distributed with an equivalent failure rate of Σᵢ₌₁ⁿ λᵢ. The mean time to failure for the system is given by

MTTF = 1 / Σᵢ₌₁ⁿ λᵢ
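A minimal sketch of the series-system formulas, assuming three identical components with the failure rate from the earlier resistor example:

```python
import math

def series_reliability(t, rates):
    # R_S(t) = exp(-t * sum(lambda_i)) for independent exponential components
    return math.exp(-t * sum(rates))

rates = [0.00008] * 3                             # three identical components in series
print(round(series_reliability(5000, rates), 4))  # e^(-1.2) ≈ 0.3012
print(round(1 / sum(rates), 1))                   # system MTTF ≈ 4166.7 hr
```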

Systems with Components in Parallel

System reliability can be improved by placing redundant components in parallel. The system operates as long as at least one of the components operates. A three-component parallel system is represented in Figure 3-11.


Figure 3-11 A Parallel Component System

If the time-to-failure of each component follows an exponential distribution, each with a constant failure rate λᵢ, i = 1, ..., n, the system reliability, assuming independence of component operation (failure of one does not affect failure of any other component), is given by

R_S(t) = 1 − ∏ᵢ₌₁ⁿ [1 − Rᵢ(t)] = 1 − ∏ᵢ₌₁ⁿ [1 − e^(−λᵢt)]

In the special case where all components have the same failure rate, λ, the system reliability for the parallel configuration is given by

R_S(t) = 1 − (1 − e^(−λt))ⁿ

For this specific situation, the mean time to failure for a system with n identical components in parallel, assuming that failed components are not repaired or replaced, can be expressed as

MTTF = (1/λ)(1 + 1/2 + 1/3 + ⋯ + 1/n)
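The parallel formulas can be sketched the same way (again with three identical components, assuming no repair):

```python
import math

def parallel_reliability(t, rates):
    # R_S(t) = 1 - prod_i (1 - exp(-lambda_i * t)), independent components
    prod = 1.0
    for lam in rates:
        prod *= 1.0 - math.exp(-lam * t)
    return 1.0 - prod

def parallel_mttf(lam, n):
    # n identical components, no repair: MTTF = (1/lam) * (1 + 1/2 + ... + 1/n)
    return sum(1.0 / k for k in range(1, n + 1)) / lam

rates = [0.00008] * 3
print(round(parallel_reliability(5000, rates), 4))  # ≈ 0.9642, versus 0.6703 for one component
print(round(parallel_mttf(0.00008, 3), 1))          # ≈ 22916.7 hr
```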


Systems with Components in Series and in Parallel

Real-life systems often consist of components in mixed configurations, with both series and parallel arrangements. For such systems, reliability calculation is based on the concepts discussed above, together with the assumption of independently operating components. Parallel subsystems are first collapsed into a composite reliability, and the resulting blocks are then treated as a series system to calculate the overall system reliability. Systems can also contain standby components, which begin operating when the base component fails. Readers may refer to the books by Amitava Mitra (2008) or Besterfield et al. (2004) for further details. A K-out-of-N system (a parallel system is a 1-out-of-N system) is another possible system configuration.


Module III Product Quality Improvement

Lecture – 3 What is Design FMEA?

Continually measuring the reliability of a product is an essential part of quality. When creating a new product, or even modifying an existing one, it is always necessary to improve the reliability of the product. One of the most powerful methods available for improving product reliability is design FMEA (Failure Mode and Effects Analysis). FMEA is an approach that combines the technology and experience of people in identifying foreseeable failure modes of a product and planning for their elimination. FMEA attempts to detect potential product-related failure modes. The approach is used to anticipate causes of failure and prevent them from happening; it is akin to eliminating or preventing potential causes of failure in a cause-and-effect diagram. The method can be applied to both product design and process design, and it considers effects on both internal and external customers.

FMEA uses occurrence and detection probability criteria in conjunction with severity criteria to develop a risk priority number (RPN) for prioritizing corrective actions. Note that for FMEA to be successful, it is extremely important to treat the FMEA as a living record, continually updated as new problems appear, to ensure that the most critical problems are identified and addressed to prevent recurrence.

A design (product) FMEA or process FMEA can provide the following benefits:

(i) Having a systematic review approach of component failure modes can ensure that any failure produces minimal damage to the product or process.

(ii) Determining the effects that any failure will have on product or process and their functions.

(iii) Determining those critical parts of a product or a process whose failure will have critical

effects on product or process operation.

(iv) Eliminating or minimizing the adverse effects that failures could generate and indicating

safeguards to be incorporated if the product or the process cannot be made fail-safe or brought

within acceptable failure limits.


(v) Help uncover oversights, misjudgments, and errors that may have been made.

It is to be noted that an FMEA document, however, cannot solve all design and process problems and failures. The document, by itself, will not fix the identified problems or define the action that needs to be taken. FMEA also cannot replace basic root cause analysis.

FMEA Team

The FMEA approach is a team effort where the responsible engineer involves design,

manufacturing, materials, quality, service, supplier, and even the next customer (whether

internal or external). The team leader has certain responsibilities, which include coordinating

corrective action assignments and follow-up, keeping files and records of FMEA forms, leading

the team through completion of the forms, keeping the process moving, and finally, drawing

everyone into participation.

Details on FMEA Documentation

The concept of FMEA is nothing new to engineers. Engineers designing and building a product

have always incorporated the concepts of FMEA in their thinking process. However, FMEA

does help keep those ideas available for future use and for the use of others. One engineer may

find a potential problem elementary and not worth extra attention; a second engineer may not recognize the problem at all. The purpose of the FMEA document (see Figure 3-12) is to allow all involved engineers access to each other's thoughts, and to design and manufacture using this collective pool of ideas. In the document, the top right corner (see Figure 3-12) holds the FMEA number, which is used only for record-keeping. There is also an item space to clarify exactly which component or process is being analyzed; the name and number of the system or subsystem being analyzed are also entered in this space. Some of the critical headings in the FMEA document are discussed below.


Figure 3-12 Design FMEA Document

Design Responsibility

The team in charge of the design or process is identified in the space designated as Design

Responsibility. The name and department of the person or group responsible for preparing the

documentation is included here.

Prepared By

The name, telephone number, and address of the persons (or group) concerned are included here so that they can be contacted in case a part of the document needs further explanation.

FMEA Date

The date the FMEA was compiled and the latest revision date are included in this FMEA Date

space.


Item/Function

In this section, the name and part number of the item being analyzed are recorded. This information avoids confusion involving similar items. Next, the function of the item is entered below the description of the item; no specifics should be left out in giving the function of the item, and if the item has more than one function, all of them should be listed here. The function of the item, including the environment in which the system operates (say temperature, pressure, and humidity), is also recorded here.

Potential Failure Mode

The Potential Failure Mode information may be one of two things. First, it may be the way in which the item could fail to meet the design criteria. Second, it may be a potential failure in a higher-level system, or the result of failure of a lower-level system. It is important to consider and list every potential failure mode. A possible starting point when listing potential failure modes is to consider past failures. The potential failure modes must also be described in technical terms. Some typical failure modes include cracked, deformed, loosened joints, leakage from welding, short circuit, and fractured.

Potential Effect(s) of Failure

The potential effects of failure are the effects of the failure as perceived by the internal or

external customer. The effects of failure must be described in terms of what the customer will

notice or experience. It is also stated whether the failure will impact personal safety or violate

any product regulations. This section of the document must also forecast what effects the

particular failure may have on other subsystems in immediate contact. Some typical effects of

failure may include engine noise and poor appearance.


Severity (S)

Severity is the assessment of the seriousness of the effect of the potential failure mode

to the subsequent component, sub-system, or customer. It is to be emphasized that the severity

applies only to the effect of the failure, not the potential failure mode. Severity rating must not

change from any reasoning except change in the product design. Severity is rated on a 1-to-10

scale, with a 1 being least severe and a 10 being the most severe. Rating criteria is given in

Table 3-1. Readers may also refer QS 9000 (http://en.wikipedia.org/wiki/QS9000), which

provides further details on severity rating.

Classification (Class)

This column is used to classify any special characteristics for components that may require

additional controls.

Potential Cause(s)/Mechanism(s) of Failure

Every potential failure cause is to be listed completely and concisely. Some failure modes may have more than one cause and/or mechanism of failure. Typical failure causes include incorrect product specification, inadequate design, over-stress, and poor environmental protection. Typical failure mechanisms include creep, fatigue, wear, and corrosion.

Occurrence (O)

Occurrence is the chance that one of the specific causes/mechanisms will occur. This is assessed for every cause and mechanism listed. A reduction in the occurrence ranking must not come from any reasoning except a direct change in the design or process; such a change is the only way a reduction in the occurrence ranking can be effected. The likelihood of occurrence is rated on a 1-to-10 scale, with 1 being the least chance of occurrence and 10 the highest. A reference for occurrence rating is given in Table 3-2.


Table 3-1 Severity Rating Reference

Effect | Criteria: Severity of Effect | Ranking
Hazardous without warning | Very high severity ranking: potential failure mode affects safe operation and/or involves regulatory noncompliance. Failure occurs without warning. | 10
Hazardous with warning | Very high severity ranking: potential failure mode affects safe operation and/or involves regulatory noncompliance. Failure occurs with warning. | 9
Very High | Item or product is inoperable, with loss of function. Customer very dissatisfied. | 8
High | Item or product is operable, but with loss of performance. Customer dissatisfied. | 7
Moderate | Item or product is operable, but comfort/convenience items are inoperable. Customer experiences discomfort. | 6
Low | Item or product is operable, but with loss of performance of comfort/convenience items. Customer has some dissatisfaction. | 5
Very Low | Certain item characteristics do not conform. Noticed by most customers. | 4
Minor | Certain item characteristics do not conform. Noticed by the average customer. | 3
Very Minor | Certain item characteristics do not conform. Noticed by discriminating customers. | 2
None | No effect. | 1


Table 3-2 Occurrence Rating Reference

Probability of Failure | Possible Failure Rates | Ranking
Very High: failure is almost inevitable | >1 in 2 | 10
 | 1 in 3 | 9
High: repeated failures | 1 in 8 | 8
 | 1 in 20 | 7
 | 1 in 80 | 6
 | 1 in 400 | 5
 | 1 in 2,000 | 4
Low: relatively few failures | 1 in 15,000 | 3
 | 1 in 150,000 | 2
Remote: failure is unlikely | <1 in 1,500,000 | 1

Current Design Controls

In order to improve the occurrence rating for a particular failure mode, design controls must be employed. The Current Design Controls entry indicates the state of control available to detect the occurrence of a failure or to minimize the chances of failure.

Detection (D)

This is a relative measure of assessment of the ability of the design control to detect either a

potential cause/mechanism or the subsequent failure mode before the component goes to

end/next user. Typically, in order to achieve a lower detection rating, design control must be

improved. A reference for rating in detection phase is given in Table 3-3.


Table 3-3 Rating of Likelihood of Detection in Design FMEA

Detection | Criteria: Likelihood of Detection by Design Control | Ranking
Absolutely impossible | Design control will not and/or cannot detect a potential cause/mechanism and subsequent failure mode, or there is no design control. | 10
Very remote | Very remote chance the design control will detect a potential cause/mechanism and subsequent failure mode. | 9
Remote | Remote chance the design control will detect a potential cause/mechanism and subsequent failure mode. | 8
Very low | Very low chance the design control will detect a potential cause/mechanism and subsequent failure mode. | 7
Low | Low chance the design control will detect a potential cause/mechanism and subsequent failure mode. | 6
Moderate | Moderate chance the design control will detect a potential cause/mechanism and subsequent failure mode. | 5
Moderately high | Moderately high chance the design control will detect a potential cause/mechanism and subsequent failure mode. | 4
High | High chance the design control will detect a potential cause/mechanism and subsequent failure mode. | 3
Very high | Very high chance the design control will detect a potential cause/mechanism and subsequent failure mode. | 2
Almost certain | Design control will almost certainly detect a potential cause/mechanism and subsequent failure mode. | 1

Risk Priority Number (RPN)

The risk priority number is the product of the severity (S), occurrence (O), and detection (D) rankings:

RPN = S × O × D

This product may be viewed as a relative measure of the design risk. Values of the RPN range from 1 to 1000, with 1 representing the smallest design risk. The RPN is used to rank-order the various causes of failure in the design. For causes with a relatively high RPN, the engineering team must take corrective action to reduce it; a score above, say, 50 may be used as a cutoff for acting to eliminate or minimize the impact of a particular cause. However, even when a concern has a relatively low RPN (<50), the FMEA team


should not overlook the concern and neglect an effort to reduce the RPN. This is especially true

when the severity of a concern is high. In such case(s), a low RPN may be extremely misleading,

not placing enough importance on a concern where the level of severity may be disastrous. In

general, the purpose of the RPN is to rank the various causes on the record. However, every

cause should be given full priority by the team, and the team should look for every method

available to reduce the RPN.
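As an illustrative sketch (the causes and ratings below are hypothetical), the RPN bookkeeping might look like:

```python
# Hypothetical causes with severity (S), occurrence (O), and detection (D)
# ratings on the 1-to-10 scales of Tables 3-1 to 3-3.
causes = [
    {"cause": "over-stress", "S": 7, "O": 4, "D": 3},
    {"cause": "corrosion",   "S": 9, "O": 2, "D": 2},
    {"cause": "fatigue",     "S": 5, "O": 6, "D": 4},
]

for c in causes:
    c["RPN"] = c["S"] * c["O"] * c["D"]   # RPN = S x O x D, range 1..1000

# Rank causes by RPN, but flag high-severity concerns regardless of RPN.
for c in sorted(causes, key=lambda c: c["RPN"], reverse=True):
    flag = "  <- high severity, do not overlook" if c["S"] >= 9 else ""
    print(c["cause"], c["RPN"], flag)
```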

Recommended Actions

After every concern has been examined and given a risk priority number, the team should begin to examine the corrective action(s) that may be employed, beginning with the concern with the greatest RPN and working in descending order of RPN. Concerns with high severity should also be examined. The purpose of the recommended actions is to reduce one or more of the ratings that constitute the risk priority number. An increase in design validation actions will reduce only the detection ranking. Only removing or controlling one or more of the causes/mechanisms of the failure mode through a design revision can effect a reduction in the occurrence ranking. And only a design revision can bring about a reduction in the severity ranking. Actions to consider when attempting to reduce the three rankings include, but are not limited to: design of experiments (DOE), a revised test plan, and a revised design.

Responsibility and Target Completion Dates

Here the individual or group responsible for the recommended actions and the target completion

date should be entered as reference for future record.

Actions Taken

After a corrective action has been implemented, a brief description of the action and its effective

date is entered. This is done after the action has been implemented so future users can track the

progress of the plan.


Resulting RPN

After the corrective actions have been implemented, the resulting severity, occurrence, and detection rankings should be re-estimated, and the resulting RPN recalculated and recorded. If no actions are taken, this section should be left blank: if no actions are taken and the prior rankings and RPN are simply repeated, future users may conclude that recommended actions were taken but had no effect. After this section is completed, the resulting RPNs should be evaluated, and if further action is deemed necessary, the steps from the recommended-actions section can be repeated.

The overall objective of design FMEA is to improve the design, improve product reliability, and reduce the chances of failure. Design of experiments (DOE) is recommended by various researchers to improve the quality of design. One such DOE approach is the so-called 'Robust Design', originally proposed by Genichi Taguchi in 1980. A brief discussion of his concepts follows.


Module III Product Quality Improvement

Lecture – 4 What is robust design?

Dr. Genichi Taguchi, a mechanical engineer who won the Deming Award four times, introduced the loss function concept, which combines cost, target, and variation into one metric. He developed the concept of robustness in design, which means that noise variables (nuisance variables, or variables that are uneconomical to control) are taken into account to ensure proper functioning of the system. He emphasized developing the design in the presence of noise rather than trying to eliminate the noise.

Loss Function

Taguchi defined quality as the loss imparted to society from the time a product is shipped to the customer. Societal losses include failure to meet customer requirements, failure to meet ideal performance, and harmful side effects.

Assuming the target (τ) is correct, losses are those caused by a product's critical performance characteristics deviating from the target. The importance of concentrating on "hitting the target" is shown by the Sony TV sales example. Despite the design and specifications being identical, U.S. customers preferred the color density of TV sets produced by Sony-Japan over those produced by Sony-USA. Investigation revealed that the frequency distributions were markedly different, as shown in Figure 3-13. Even though Sony-Japan had 0.3% of sets outside the specifications, its distribution was normal and centered on the target with minimum variability compared to Sony-USA. The Sony-USA distribution was uniform between the specifications, with no values outside them. Clearly, customers perceived quality as meeting the target (Sony-Japan) rather than just meeting the specifications (Sony-USA). Ford Motor Company had a similar experience with its transmissions.


Figure 3-13 Distribution of color density for Sony-USA and Sony-Japan

Out-of-specification counts are the common measure of quality loss under the goal-post mentality [Figure 3-14(a)]. Although this concept may be appropriate for accounting, it is a poor concept for various other areas. It implies that all products that meet specifications are good, whereas those that do not are bad. From the customer's point of view, a product that barely meets the specification is as good (or bad) as a product that is barely out of specification. Thus, the wrong measuring system for quality loss is being used. Taguchi's loss function [Figure 3-14(b)] corrects this deficiency by combining cost, target, and variation into one metric.

Figure 3-14(a): Discontinuous Loss Function (Goal-Post Mentality)


Figure 3-14(b): Continuous Quadratic Loss function (Taguchi Method)

Figure 3-14(a) shows the loss function that describes the Sony-USA situation under the 'goal-post mentality' for an NTB (nominal-the-best) type of quality characteristic. Performance characteristics commonly treated as NTB include color density, voltage, bore dimensions, and surface finish. In NTB, a target (nominal dimension) is specified along with upper and lower specification limits, say for the bore diameter of an engine cylinder liner. Thus, when the value of the performance characteristic, y, is within specifications the quality loss is $0, and when it is outside the specifications the loss is $A. The quadratic loss function shown in Figure 3-14(b) describes the Taguchi way of defining the loss function: loss occurs as soon as the performance characteristic, y, departs from the target, τ.

The quadratic loss function is described by the equation

L = k(y − τ)²

where L = cost incurred as quality deviates from the target (τ), y = the performance characteristic, and k = the quality loss coefficient. The loss coefficient is determined by setting

k = A/(y − τ)² = A/Δ²

where Δ is the distance from the target to the specification limit at which the loss is $A.


Assuming the specifications (NTB) are 10 ± 3 for a particular quality characteristic and the average repair cost is $230, the loss coefficient is calculated as

k = A/Δ² = 230/3² = 25.6

Thus the loss function is L = 25.6(y − 10)², and at y = 12 the loss is

L = 25.6(12 − 10)² = $102.40
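A minimal sketch of this calculation:

```python
def quadratic_loss(y, target, k):
    # Taguchi nominal-the-best loss: L = k * (y - target)^2
    return k * (y - target) ** 2

A, delta = 230.0, 3.0            # repair cost at the limit of the 10 +/- 3 spec
k = round(A / delta ** 2, 1)     # 230 / 9 ≈ 25.6
print(quadratic_loss(12, 10, k)) # 102.4, i.e. $102.40
```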

Average or Expected Loss

The loss described above assumes that the quality characteristic is static. In reality, one cannot always hit the target: the characteristic varies due to the presence of noise, and the loss function must reflect the variation of many pieces rather than a single piece. An equation can be derived by summing the individual loss values and dividing by their number to give

L̄ = k[σ² + (ȳ − τ)²]

where L̄ = the average or expected loss, σ² is the process variance of the y characteristic, and ȳ is the average dimension coming out of the process. Because the population standard deviation, σ, is unknown, the sample standard deviation, s, is used as a substitute. This makes the variability value somewhat larger, so the average loss (Figure 3-15) is somewhat conservative.


Figure 3-15 Average or Expected Loss

The loss can be lowered by reducing the variation and adjusting the average, ȳ, to bring it on target.

Let us compute the average loss for a process that produces shafts. The target value is 6.40 mm and the loss coefficient is k = 9500. Eight samples give readings of 6.36, 6.40, 6.38, 6.39, 6.43, 6.39, 6.46, and 6.42 mm. Thus ȳ = 6.40375 and s = 0.0315945, and

L̄ = k[s² + (ȳ − τ)²] = 9500[(0.0315945)² + (6.40375 − 6.40)²] = $9.62
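The shaft example can be verified with a few lines (a sketch using only the standard library):

```python
import statistics

shafts = [6.36, 6.40, 6.38, 6.39, 6.43, 6.39, 6.46, 6.42]
target, k = 6.40, 9500

y_bar = statistics.mean(shafts)   # 6.40375
s = statistics.stdev(shafts)      # 0.0315945 (sample standard deviation)

# Average (expected) loss: L-bar = k * [s^2 + (y_bar - target)^2]
L = k * (s ** 2 + (y_bar - target) ** 2)
print(round(L, 2))                # 9.62, i.e. $9.62
```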

There are two other common loss functions: smaller-the-better and larger-the-better. In the smaller-the-better type, a smaller value of the characteristic of interest is preferred, as with defect rate, expected cost, and engine oil consumption. Figure 3-16 illustrates the concept.


Figure 3-16 : (a) Smaller –the –Better and (b) Larger–the- Better-type of Loss Function

To summarize, the equations for the three common loss functions are:

Nominal-the-best: L = k(y − τ)², where k = A/Δ²; L̄ = k(MSD), where MSD = Σ(y − τ)²/n; equivalently L̄ = k[σ² + (ȳ − τ)²]

Smaller-the-better: L = ky², where k = A/y²; L̄ = k(MSD), where MSD = Σy²/n; equivalently L̄ = k(ȳ² + s²)

Larger-the-better: L = k(1/y²), where k = Ay²; L̄ = k(MSD), where MSD = Σ(1/y²)/n; equivalently L̄ = k[Σ(1/y²)/n]

In the larger-the-better case, a higher value of the characteristic of interest is preferred. Examples of performance characteristics treated as larger-the-better include bond strength of adhesives, weld strength, tensile strength, and expected profit.


Orthogonal Arrays

Taguchi's method emphasizes highly fractionated factorial design matrices, or orthogonal arrays (OA) [http://en.wikipedia.org/wiki/Orthogonal_array], for experimentation. These arrays trace back to the work of Sir R. A. Fisher and of Prof. C. R. Rao (http://en.wikipedia.org/wiki/C._R._Rao) of the Indian Statistical Institute, Kolkata. An L8 orthogonal array is shown below (Table 3-4). An orthogonal array is a type of experimental design in which the columns for the independent variables are "orthogonal", or "independent", of one another.

Table 3-4 L8 Orthogonal Array

The 8 in the designation OA8 (Table 3-4) represents the number of experimental rows, which is also the number of treatment conditions (TC). Across the top of the orthogonal array is the maximum number of factors that can be assigned, which in this case is seven. The levels are designated by 1 and 2; if more levels occur in the array, then 3, 4, 5, and so forth are used. Other coding schemes, such as −1, 0, and +1, can also be used. The orthogonal property of an OA is not compromised by interchanging rows or columns. Orthogonal arrays can also handle dummy factors and can be modified accordingly. With the help of an OA, the number of trials or experiments can be drastically reduced.


To determine the appropriate orthogonal array, we can use the following procedure:

Step-1 Define the number of factors and their levels.
Step-2 Consider any suspected interactions (if required).
Step-3 Determine the necessary degrees of freedom.
Step-4 Select an orthogonal array.

To understand the required degrees of freedom, let us consider four two-level factors (levels 1 and 2), A, B, C, and D, and two suspected interactions, BC and CD. The degrees of freedom (df) are

df = 4(2 − 1) + 2(2 − 1)(2 − 1) + 1 = 7

so at least seven treatment conditions (experiments) are needed for this two-level design.

Selecting the Orthogonal Array

Once the degrees of freedom are known, the factor levels identified, and the possible interactions to be studied listed, the next step is to select the orthogonal array (OA). The number of treatment conditions is equal to the number of rows in the OA and must be equal to or greater than the degrees of freedom. Table 3-5 shows the orthogonal arrays that are available, up to OA36. Thus, if the number of degrees of freedom is 13, the next available OA is OA16. The second column of the table gives the number of rows and is redundant with the designation in the first column. The third column gives the maximum number of factors that can be used, and the last four columns give the maximum number of columns available at each level.

Analysis of the table shows that there is a geometric progression for the two-level arrays OA4, OA8, OA16, OA32, ..., which is 2², 2³, 2⁴, 2⁵, ..., and for the three-level arrays OA9, OA27, OA81, ..., which is 3², 3³, 3⁴, .... Orthogonal arrays can also be modified.
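A small sketch of steps 3 and 4 for the example above (the list of two-level arrays is abbreviated for illustration):

```python
# Degrees of freedom: each two-level factor contributes (levels - 1) df,
# each suspected two-factor interaction contributes (2-1)*(2-1) df,
# plus 1 df for the overall mean.
n_factors, n_interactions, levels = 4, 2, 2
df = n_factors * (levels - 1) + n_interactions * (levels - 1) ** 2 + 1
print(df)  # 7

# Pick the smallest two-level OA with at least df rows (treatment conditions).
two_level_oa_rows = [4, 8, 16, 32]   # OA4, OA8, OA16, OA32
print(next(n for n in two_level_oa_rows if n >= df))  # 8 -> use OA8
```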


Table 3-5 Required Orthogonal Array

Interaction Table

Confounding is the inability to distinguish the effects of one factor from those of another factor and/or an interaction. To prevent confounding, one must know which columns to use for the factors in the Taguchi method. This knowledge is provided by an interaction table, shown in Table 3-6.


Table 3-6 Interaction Table for OA8

Let's assume that factor A is assigned to column 1 and factor B to column 2. If there is an

interaction between factors A and B, then column 3 is used for the interaction, AB. Another

factor, say, C, would need to be assigned to column 4. If there is an interaction between factor A

(column 1) and factor C (column 4), then interaction AC will occur in column 5. The columns

that are reserved for interactions are used so that calculations can be made to determine whether

there is a strong interaction. If there are no interactions, then all the columns can be used for

factors. The actual experiment is conducted using the columns designated for the factors, and

these columns are referred to as the design matrix. All the columns are referred to as the design

space.

Linear Graphs

Taguchi developed a simpler method of working with interactions, using linear graphs.


Figure 3-17 Two linear Graphs for OA8

Two linear graphs for OA8 are shown in Figure 3-17. They make it easier to assign factors and interactions to the various columns of an array. Factors are assigned to the points; if there is an interaction between two factors, it is assigned to the line segment between the two points. For example, using the linear graph on the left of the figure, if factor B is assigned to column 2 and factor C to column 4, then interaction BC is assigned to column 6. If there is no interaction, column 6 can be used for a factor.

The linear graph on the right can be used when one factor has interactions with three other factors. Three-level orthogonal arrays must use two columns for each interaction, because one column is needed for the linear component and one for the quadratic component of the interaction. The linear graphs (and, for that matter, the interaction tables) are not designed for three-factor or higher-order interactions, which are rare events. Linear graphs can also be modified. Use of the linear graphs requires some trial-and-error activity.

Interactions

An interaction is a relationship between two of the X factors, or between an X factor and a noise variable, considered in the experiment. Figure 3-18 shows graphical relationships between two factors. At (a) there is no interaction, since the lines are parallel; at (b) there is some interaction between the factors; and at (c) there is strong evidence of interaction. The graph is constructed by plotting the points A1B1, A1B2, A2B1, and A2B2.


Figure 3-18 Interaction between Two Factors

Signal-to-Noise (S/N) Ratio

An important contribution of Taguchi is the signal-to-noise (S/N) ratio. It was developed as a proactive equivalent of the reactive loss function. When a person puts a foot on the brake pedal of a car, energy is transformed with the intent of slowing the car; that is the signal. However, some of the energy is wasted in squeal, pad wear, and heat. Figure 3-19 emphasizes that energy is neither created nor destroyed.

Figure 3-19 Concept of Signal-to-Noise (S/N) Ratio

Signal factors are set by the designer or operator to obtain the intended value of the response variable. Noise factors are not controlled, or are very expensive or difficult to control. Both the average, ȳ, and the variance, s², need to be captured in a single figure of merit. In elementary form, S/N is ȳ/s, which is the inverse of the coefficient of variation and a unitless value. Squaring and taking the log transformation gives

(S/N)_N = 10 log₁₀(ȳ²/s²)

Adjusting for small sample sizes (and working in decibels rather than bels) for the NTB type gives

(S/N)_N = 10 log₁₀[(ȳ²/s²) − (1/n)]

There are many different S/N ratios. The equation for nominal-the-best was given above; it is used wherever there is a nominal or target value and variation about that value, such as dimensions, voltage, weight, and so forth. The target (τ) is finite but not zero. For a robust (optimal) design, the S/N ratio should be maximized. The nominal-the-best S/N value is a maximum when the average is near the target and the variance is small. Taguchi's two-step optimization approach is first to identify factor settings (X) that reduce the variation of Y, and then to bring the average of Y onto the target using a different set of factors (X). The S/N ratio for a process that has a temperature average of 21°C and a sample standard deviation of 2°C for four observations is

(S/N)_N = 10 log₁₀[(ȳ²/s²) − (1/n)] = 10 log₁₀[(21²/2²) − (1/4)] = 20.41 dB

The adjustment for the small sample size has little effect on

the answer. If it had not been used, the answer would have been 20.42 dB.
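A one-function sketch of the nominal-the-best S/N calculation:

```python
import math

def sn_nominal_the_best(y_bar, s, n):
    # (S/N)_N = 10 * log10(y_bar^2 / s^2 - 1/n), in decibels
    return 10 * math.log10(y_bar ** 2 / s ** 2 - 1.0 / n)

print(round(sn_nominal_the_best(21, 2, 4), 2))  # 20.41 dB
```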

Smaller-the-Better

The S/N ratio for smaller-the-better is used for situations where the target value (τ) is zero, such as computer response time, automotive emissions, or corrosion. The equation used is

(S/N)_S = −10 log₁₀(MSD) = −10 log₁₀(Σy²/n)

The negative sign ensures that the largest S/N value gives the optimum value for the response variable and thus a robust design. The mean square deviation (MSD) form is given to show the relationship with the loss function.


Larger-the-Better

The S/N ratio for the larger-the-better type of characteristic is given by

(S/N)_L = −10 log₁₀(MSD) = −10 log₁₀[Σ(1/y²)/n]

Let us consider a battery-life experiment. For the existing design (E), the lives of three AA batteries are measured as 20, 22, and 21 hours. A different design (D) produces battery lives of 17, 21, and 25 hours. To determine which design is better, and by how much, we can use the S/N ratio. As battery life is a larger-the-better (LTB) type of characteristic (response), the calculations are

(S/N)_E = −10 log₁₀[(1/20² + 1/22² + 1/21²)/3] = 26.42 dB

(S/N)_D = −10 log₁₀[(1/17² + 1/21² + 1/25²)/3] = 26.12 dB

Δ = 26.42 − 26.12 = 0.3 dB

Since (S/N)_E exceeds (S/N)_D, the existing design is about 7% better than the different design (a 0.3 dB difference corresponds to a factor of roughly 1.07). More data, from so-called 'confirmatory trials', would be required to confirm the result.
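The battery comparison can be checked with a short sketch:

```python
import math

def sn_larger_the_better(values):
    # (S/N)_L = -10 * log10(mean of 1/y^2)
    msd = sum(1.0 / y ** 2 for y in values) / len(values)
    return -10 * math.log10(msd)

existing = [20, 22, 21]
different = [17, 21, 25]
print(round(sn_larger_the_better(existing), 2))   # 26.42 dB
print(round(sn_larger_the_better(different), 2))  # 26.12 dB
```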

Although the signal-to-noise ratio metrics have achieved good practical results, they are yet to be universally accepted as valid statistical measures; the controversy concerns the choice of measures and the shape of the loss function. Nevertheless, Taguchi's concept has produced a paradigm shift in thinking about product quality, and it can optimize a design without any empirical regression modeling.

It should also be noted that Taguchi recommends an inner-array (controllable factors) and outer-array (noise variables) design to find the best settings for a robust design; researchers often omit the outer array for ease of experimentation, a practice that should be avoided. Engineering knowledge and an idea of the likely interactions are essential to get the best benefit out of an OA design. For further details on the Taguchi method, readers may refer to the books by P. J. Ross (1996), A. Mitra (2008), Besterfield et al. (2004), and M. Phadke (1995).


Module III Product Quality Improvement

Lecture – 5 How is the Six Sigma philosophy aligned with product quality improvement?

In 2000, M. Harry and R. Schroeder published 'Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations'. Since that time, there has been considerable interest in this subject. In the book, the authors devoted much space to a review of the concept. In the Six Sigma world, the quality planning process is referred to as Design for Six Sigma (DFSS). DFSS is focused on creating new or modified product designs that are capable of significantly higher levels of performance (using the Six Sigma methodology). The authors emphasized a Define-Measure-Analyze-Design-Verify (DMADV) sequence of quality planning and design that can be used for products or services. The DFSS matrix is a tool that captures the vital quality planning information, allowing a Six Sigma team to record it and deliver it as required in the DMADV phases.

Statistical Concept for Six Sigma

According to James Harrington, "Six sigma is simply a TQM process that uses process capability analysis as a way of measuring improvement." Sigma, σ, is the Greek symbol for the statistical measure of dispersion called the standard deviation. It is the best measure of process variability: the smaller the standard deviation, the less variability in the process. Figure 3-20 shows a measured quality characteristic (Y) on samples collected from a process that is normally distributed and centered exactly on the target, with upper and lower specification limits (USL and LSL). The estimated ±6σ spread fits exactly within the specification limits. In this situation, 99.9999998% of the product or service will be between specifications, and the nonconformance rate will be 0.002 parts per million, or 2.0 parts per billion. The situation diagrammed represents a process capability index (Cp and Cpk) of 2.0. Earlier, a Cpk of 1.33 was considered the de facto standard in industry. Table 3-7 shows the percent between specifications, the nonconformance rate, and the process capability for different specification-limit locations.


Figure 3-20 Normal Distribution and Specification bound for a Quality Characteristics

Table 3-7 Process Centered on Target

Specification limit | Percent conformance | Nonconformance rate (ppm) | Process capability (Cpk)
±1σ | 68.27 | 317,300 | 0.33
±2σ | 95.45 | 45,500 | 0.67
±3σ | 99.73 | 2,700 | 1.00
±4σ | 99.9937 | 63 | 1.33
±5σ | 99.999943 | 0.57 | 1.67
±6σ | 99.9999998 | 0.002 | 2.00

According to the Six Sigma philosophy, a process rarely stays centered: the center tends to "shift" above and below the target, μ. Figure 3-21 shows a process that is normally distributed but has shifted within a range of 1.5σ above and 1.5σ below the target. For the situation in Figure 3-21, 99.99966% of the product or service output characteristic will be between specifications, and the nonconformance rate will be 3.4 ppm. The off-center situation gives a process capability index (Cpk) of 1.5, against the earlier de facto standard of 1.33. Table 3-8 shows the percent between specifications, the nonconformance rate, and the capability for different specification-limit locations for a process off-center by 1.5σ. The magnitude and type of shift is a matter of analysis and should not be assumed ahead of time; there is little evidence of case studies in the literature indicating a shift of more than 1.5σ. The automotive industry recognized the Six Sigma concept in the mid-1980s, evaluated it, and deemed it acceptable. It is to be noted that the original work on Six Sigma was based on a few empirical studies of a single output.

Figure 3-21 Shift in process output characteristic mean

Table 3-8 Process Off-Center by 1.5σ

Specification limit | Percent conformance | Nonconformance rate (ppm) | Process capability (Cpk)
±1σ | 30.23 | 697,700 | −0.167
±2σ | 69.13 | 308,700 | 0.167
±3σ | 93.32 | 66,810 | 0.5
±4σ | 99.379 | 6,210 | 0.834
±5σ | 99.9767 | 233 | 1.167
±6σ | 99.99966 | 3.4 | 1.5
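Both tables can be reproduced from the standard normal distribution; a sketch using only the standard library:

```python
import math

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def nonconformance_ppm(spec_sigma, shift=0.0):
    # Fraction outside +/- spec_sigma when the mean is shifted by `shift` sigma
    out = phi(-(spec_sigma - shift)) + phi(-(spec_sigma + shift))
    return out * 1e6

for k in range(1, 7):
    centered = nonconformance_ppm(k)        # Table 3-7 column
    shifted = nonconformance_ppm(k, 1.5)    # Table 3-8 column
    cpk_shifted = (k - 1.5) / 3             # Cpk under the 1.5-sigma shift
    print(k, round(centered, 4), round(shifted, 1), round(cpk_shifted, 3))
```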

The statistical aspects of Six Sigma tell us that we should reduce the process variability, σ, and try to keep the process centered on the target, μ. These concepts are not new; they were long advocated by Shewhart, Deming, and Taguchi.


Six Sigma DMAIC Methodology

The standard problem-solving approach used in Six Sigma is known as DMAIC (Define, Measure, Analyze, Improve, and Control).

Define Phase

After a Six Sigma project is selected, the first step is to clearly define the problem. This

activity is significantly different from project selection. Project selection generally

responds to symptoms of a problem and usually results in a rather abstract problem

statement. One must describe the problem in operational or measurable terms that facilitate

further analysis. For example, a firm might have a history of poor reliability of electric

generator it manufactures, resulting in a Six Sigma project to improve generator reliability.

A preliminary investigation of warranty and field-service repair data might suggest that the source of most problems is brush wear and, more specifically, a problem with brush hardness variability. Thus, the problem might be defined as "reduce the variability of

brush hardness." This process of drilling down to a more specific problem statement is

sometimes called project scoping.

A good problem statement also should identify the customer (external or internal) and the

CTQ (Critical-to-Quality) Characteristics that have the most impact on product or service

performance, describe the current level of performance or the nature of errors or customer

complaints, identify the relevant performance metrics, benchmark best performance

standards, calculate the cost/ revenue implications of the project, and quantify the expected

level of performance from a successful Six Sigma effort. The Define phase should also

address such project management issues as what will need to be done, by whom, and

when.

Measure

This phase of the DMAIC process focuses on how to measure the internal processes that impact the CTQs. It requires an understanding of the causal relationship(s) between process performance and the customer's concept of value. Once these relationships are understood, procedures for gathering facts (collecting reliable data or observations, and careful listening) must be defined and implemented. Data from existing production processes and practices often provide important information, as does feedback from supervisors, workers, customers, and field-service employees. Important concepts at this stage are 'opportunity' and 'rolled throughput yield' (http://asq.org/qic/display-item/?item=15398).

Analyze

A major flaw in many problem-solving approaches is a lack of emphasis on rigorous

statistical analysis. Too often, we want to jump to a solution without fully understanding

the nature of the problem and identifying the source of the problem. The Analyze phase of

DMAIC focuses on why defects, errors, or excessive variation occur.

After potential causal variables are identified, statistical experiments are conducted to

verify them. These experiments generally consist of formulating some hypothesis to

investigate, collecting data, analyzing the data, and reaching a reasonable and statistically

supportable conclusion. Statistical thinking and analysis plays a critical role in this phase.

It is one of the reasons why statistics plays an important part in Six Sigma training.

Improve

Once the root cause of a problem is understood, the analyst or team needs to generate ideas for removing or resolving the problem and improving the performance measures or CTQs. This idea-gathering phase is a highly creative activity, because many optimal solutions are not obvious. One of the difficulties in this task is the natural instinct to prejudge ideas before thoroughly evaluating them. Most people have a natural fear of proposing a "silly" idea or looking foolish. However, such ideas may actually form the basis for a creative and useful solution. Effective problem solvers must learn to defer judgment and develop the ability to generate a large number of ideas at this stage of the process, whether practical or not.

After a set of ideas has been proposed, it is necessary to evaluate them and select the most promising. This process includes confirming that the proposed solution will positively impact the key process output variables or CTQs, and identifying the maximum acceptable ranges of these variables.

Problem solutions often entail technical or organizational changes. Often some sort of

decision model is used to assess possible solutions against important criteria such as cost,

time, quality improvement potential, resources required, effects on supervisors and

workers, and barriers to implementation such as resistance to change or organizational

culture. To implement a solution effectively, responsibility must be assigned to a person or

a group who will follow through on what must be done, where it will be done, when it will

be done, and how it will be done.

Control

The Control phase focuses on how to maintain the improvements, which includes putting

statistical process control (SPC) in place to ensure that the key variables remain within

the naturally acceptable limits under the modified process. These improvements might

include establishing the new standards and procedures, training the workforce, and

instituting controls to make sure that improvements do not die over time. Controls might

be as simple as using checklists or periodic status reviews to ensure that proper

procedures are followed.

Overall, the Six Sigma methodology is about working smarter, not harder. It emphasizes measurements that impact the customer, ways to improve the process, and decision making based on firm statistical concepts. Readers may refer to any standard book or the references given below to learn more about the Six Sigma methodology.