Total Quality Management-study Material for supervisors


TOTAL QUALITY MANAGEMENT

Delivering products with a level of quality that meets customer requirements is essential to business success. Indeed, in the fierce competition of today’s global markets, the level of quality needs to exceed what customers already expect, and at a competitive price.

Although companies all over the world proclaim that quality is their topmost priority, this claim is only partially true. Certainly, improvements have been made because of increasing quality awareness and the implementation of ISO standards. However, that is not good enough. We still produce and market for price. Why? We have been conditioned to buy on price considerations. If we investigate a little, it is easy to find that the enemy of quality is us. Many of us invest in shares and expect a good return. Corporate managers, individual investors and institutional investors are all under pressure to deliver great returns on their investments in the short term, to the extent that quality often takes a back seat to profits. In the software industry, it is not unusual to find new software launched although bugs are still present and beta-site testing is not complete. In the food industry, labels may present misleading information about ingredients and their nutritional value. With sophisticated testing equipment, it is common to find that customers have a tough time fixing problems and getting help from the company that supplied it. Nevertheless, in all industries quality is proclaimed as the only way to true improvement.

The pressure is on everyone to perform, but performance alone will not do it. An organization must produce efficiently and effectively in order to survive, and one way to survive is through quality. This quality, however, has to be in the minds of employees, corporate management and the public at large. We must, as a society, try to do our best and, as organizations, try to be good corporate citizens. If that means we have to tell the truth, educate our employees and our stockholders, and focus on long-term survival rather than short-term gains, so be it. If we practice quality, then the integrity of everything we do is the primary issue. We must recognize that cutting corners is not the way to practice quality. Quality is practiced by having a vision, goals, and appropriate tools and action plans.

Total Quality Management is a proven method of managing and continually improving quality. The successful implementation of its principles in Japanese firms has presented a roadmap for quality all over the world.

Achieving this quality will involve the entire company – and often suppliers and customers as well. It requires good management systems and practices throughout the organization, from having a vision of the future of the company to maintaining a safe and healthy workplace. It means having well-trained and motivated employees, standardized work procedures, and effective production control. It means ensuring the quality of incoming supplies, and operating a fast and efficient after-sales service. Above all, it requires the active participation of senior management. In short, every function in the company, and every member of staff can and must support quality, hence the name Total Quality Management (TQM).

What is Quality?

Defining quality is not as easy as it seems, because different people have different perceptions of what constitutes quality. Many consumers have difficulty defining quality and cannot pinpoint their quality standard in precise terms. Is a Mercedes a good quality car? What about a BMW? Further, the meaning of quality has evolved over time.

Today, there is no single universal definition of quality. Some people view it as “conformance to specifications”. Others define it as “meeting the customer’s requirements”. Let us examine a few common definitions.

Conformance to specifications measures how well the product or service meets the targets and tolerances determined by its designers. For example, the dimensions of a machine part may be specified by its design engineers as 10 ± 0.5 mm. This would mean that the target dimension is 10 mm, but the dimensions can vary between 9.5 and 10.5 mm. Conformance to specification is directly measurable, though it may not be directly related to the consumer’s idea of quality.
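Because conformance to specification is directly measurable, it reduces to a simple comparison. The tolerance rule above can be sketched in a few lines of Python; the sample dimensions are invented for illustration:

```python
# Conformance to specifications: the designers' target is 10 mm with a
# tolerance of +/- 0.5 mm, so any dimension between 9.5 and 10.5 mm conforms.
TARGET_MM = 10.0
TOLERANCE_MM = 0.5

def conforms(dimension_mm):
    """Return True if the measured dimension falls within tolerance."""
    return abs(dimension_mm - TARGET_MM) <= TOLERANCE_MM

# Illustrative measurements: 10.6 mm falls outside the limits.
for d in (9.8, 10.4, 10.6, 9.5):
    print(d, "conforms" if conforms(d) else "out of specification")
```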

Fitness for use focuses on how well the product performs its intended function or use. For example, a Mercedes and a Land Rover both meet a fitness-for-use definition if one considers transportation as the intended function. However, if the definition becomes more specific and assumes that the intended use is transportation on mountain roads, the Land Rover has a greater fitness for use. You can also see that fitness for use is a user-based definition in that it is intended to meet the needs of a specific user group.

Value for price paid is a definition of quality that consumers often use for product or service usefulness. This is the only definition that combines economics with consumer criteria; it assumes that the definition of quality is price sensitive. For example, suppose that you wish to buy a travel insurance policy and discover that the same policy is offered by two different companies at different premium rates. If you take the less expensive policy, you will feel that you have received greater value for the price.

Defining quality in manufacturing organizations is often different from doing so in services. Manufacturing organizations produce a tangible product that can be seen, touched, and directly measured, for example cars, computers and clothes. Therefore, quality definitions in manufacturing usually focus on tangible product features.

In contrast to manufacturing, service organizations produce a product that is intangible. Usually, the complete product cannot be seen or touched; rather, it is experienced. Examples include the quality of health services at a hospital and the quality of education at a university.

In a nutshell, given the level of competition in today’s market place, we might define quality as meeting and improving upon customer requirements.

Traditionally, the notion was that if we produce something that does not meet customer requirements, we need to find the deficiency before giving it to the customer. Hence, the emphasis was on defining sampling plans, operating characteristic curves, acceptable quality levels, and so on; this is basically a reactive approach, and it is no longer acceptable. Quality is now expected to be proactive: it is viewed as designing processes that prevent errors, as opposed to finding them.

The reason quality has gained such prominence is that organizations have come to understand the high cost of poor quality. Quality affects all aspects of the organization and has dramatic cost implications. The most obvious consequence occurs when poor quality creates dissatisfied customers and eventually leads to loss of business.

COST OF QUALITY

The term cost of quality (COQ) is really a misnomer; it is more accurately described as the cost of non-quality. It is a measurement which indicates how much it costs an organization per year to provide quality in everything it does, whether products or services.

The majority of senior managers are unaware of the true cost of ‘getting things wrong’. Most of them are ignorant of it for the simple reason that COQ never appears on a balance sheet. In many cases, they have never been asked to measure it and would not know where to start.

When companies start measuring COQ, they find the results staggering. In Europe, manufacturing companies were found to operate with a COQ of 15-25% of turnover. In the service sector, it is 40-50% of turnover. In some public sector units, it was found to be higher still! Companies like IBM, British Telecom and Jaguar originally experienced a COQ as high as 40%. People wondered how their companies actually made a profit.

Costing quality is an extremely important activity, and it has driven many companies to embrace TQM in order to reduce COQ in the long run, which is achievable. Measuring COQ is an exercise organizations have to pursue if they wish to improve their competitive edge.

The first mistake companies make is to measure only the tangible items, for example direct labour for rework, scrap and wastage of materials. These costs in manufacturing are low, but the rework costs in service areas, which are huge, tend to be ignored. Deming, the quality guru, claims that 85% of quality problems are created by people who never touch the product. Measure the intangibles in the service areas first; this is where the real waste is evident.

Let us illustrate this with an example. Managements are keen to promote the market share of their product over competitors'. Marketing teams spend a lot of time with customers and are keen to agree to their special requirements. In many cases, they agree to modifications to their standardized product. This is all fine, but they should be careful that these modifications do not become excessive or time-consuming for those in manufacturing to produce. The company starts to get a name for being customer- rather than product-led. Meanwhile, new orders pour into the company. This creates further problems for designers and draughtsmen, for inventory control officers who have to order non-standard components, and for the shop floor, which has to develop new approaches to manufacturing and assembly. The few agreed modifications change the large-batch production lines into a jobbing shop. Material management issues arise and the cost of inventory rises. The lesson is that companies can go out of business with full order books if they fail to promote internal dynamics to support an external demand for customized sales.

The cost of quality has two components: i) avoidance costs and ii) total failure costs.

i) Avoidance costs: these are further broken down into two categories, appraisal costs and prevention costs.

a) Appraisal costs:

These costs are the expenditures made by the organization to examine the levels of quality at which products are being produced. If product quality levels are satisfactory, production is allowed to continue. Otherwise, production is suspended until effective corrective action has been implemented and satisfactory levels of product quality are achieved. Appraisal costs are incurred to discover and correct problems after they surface. Examples are all inspection-related expenses, including salaries and fringe benefits, test samples for destructive tests, lab materials consumed in process tests, laboratory tooling expenses, costs associated with SPC, and inspections and tests within the process.

b) Prevention costs

These are the expenditures a company makes to avoid producing defective or unacceptable product; they are incurred before the defective component is produced. Examples are all training, quality assurance expenses, the cost of design changes, the portion of design engineering devoted to quality assurance, and the cost of processing changes incurred prior to release for production.

ii) Total failure costs have two components: internal failure costs and external failure costs.

a) Internal failure costs are those incurred by the company while it still has ownership of the product. Examples: scrap, waste in process, rework and repair charges.

b) External failure costs are those incurred by an organization after it has transferred ownership to its customer. Examples are warranty costs, returned goods, design error and marketing error.

Companies that consider quality important invest heavily in prevention and appraisal costs in order to prevent internal and external failure costs. The earlier defects are found, the less costly they are to correct. For example, detecting and correcting defects during product design and production is considerably less expensive than when the defects are found at the customer site.
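The four categories can be pulled together into a simple cost-of-quality summary. The Python sketch below uses invented figures purely to show the arithmetic; the category names follow the text above:

```python
# Hypothetical annual quality costs, grouped into the categories
# described above (all figures invented for illustration).
avoidance_costs = {
    "prevention": 20_000,        # training, QA, design changes
    "appraisal": 35_000,         # inspection, testing, SPC
}
failure_costs = {
    "internal_failure": 60_000,  # scrap, rework, repair
    "external_failure": 90_000,  # warranty claims, returned goods
}
annual_turnover = 1_000_000

# COQ is simply the sum of everything spent on (non-)quality in a year.
coq = sum(avoidance_costs.values()) + sum(failure_costs.values())
print(f"COQ = {coq:,} ({coq / annual_turnover:.1%} of turnover)")
```

Even these modest invented numbers land in the 15-25% range the text quotes for manufacturing, with failure costs dominating the total; that is exactly the pattern that spending on prevention and appraisal is meant to reverse.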

Quality Gurus

To fully appreciate the development of the TQM movement, it is essential to understand the valuable contributions made by a few quality gurus.

Walter A. Shewhart

Walter A. Shewhart was a statistician at Bell Labs in the period 1920-30. He recognized that variability existed in all manufacturing processes. He developed quality control charts that are used to identify whether the variability in the process is random or due to an assignable cause, such as poor workers or miscalibrated machinery. He stressed that eliminating variability improves quality. His work created the foundation for today’s statistical process control, and he is often referred to as the “grandfather of quality control”.
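Shewhart's control-chart idea can be sketched numerically: compute the process mean, set control limits three standard deviations either side, and flag points outside them as candidates for an assignable cause. This is a minimal Python sketch; the measurements are invented, and real charts typically estimate the limits from subgroup ranges rather than a raw standard deviation:

```python
import statistics

# Illustrative process measurements (e.g. a machined dimension in mm).
measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)
ucl = mean + 3 * sigma   # upper control limit
lcl = mean - 3 * sigma   # lower control limit

# Points beyond the limits suggest assignable (non-random) variation.
out_of_control = [x for x in measurements if not (lcl <= x <= ucl)]
print(f"mean={mean:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}  flagged={out_of_control}")
```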

W.Edwards Deming

W. Edwards Deming is often referred to as the “father of quality control” and is one of the most inspirational and influential gurus of quality in the twentieth century. He was a statistics professor at New York University in the 1940s. After World War II he assisted many Japanese companies in improving quality. The Japanese regarded him so highly that in 1951 they established the Deming Prize, an annual award given to firms that demonstrate outstanding quality. It was almost 30 years later that American businesses began adopting Deming’s philosophy.

A number of elements of Deming’s philosophy depart from traditional notions of quality. The first is the role management should play in a company’s quality improvement effort. Historically, poor quality was blamed on workers—on their lack of productivity, laziness, or carelessness. However, Deming pointed out that only 15 percent of quality problems are actually due to worker error. The remaining 85 percent are caused by processes and systems, including poor management.

Deming said that it is up to management to correct system problems and create an environment that promotes quality and enables workers to achieve their full potential. He believed that managers should drive out any fear employees have of identifying quality problems, and that numerical quotas should be eliminated. Proper methods should be taught, and detecting and eliminating poor quality should be everyone’s responsibility.

Deming outlined his philosophy on quality in his famous “14 Points.” These points are principles that help guide companies in achieving quality improvement. The principles are founded on the idea that upper management must develop a commitment to quality and provide a system to support this commitment that involves all employees and suppliers. Deming stressed that quality improvements cannot happen without organizational change that comes from upper management.

Deming’s 14 Points for the transformation of management:

1. Create constancy of purpose towards improvement of product and service, with the aim to become competitive, to stay in business and to provide jobs.
2. Adopt the new philosophy.
3. Cease reliance on mass inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.
4. End the practice of awarding business on the basis of price alone.
5. Constantly improve the system of production and service; this will improve quality and productivity and constantly decrease costs.
6. Institute training on the job.
7. Institute leadership; the aim of supervision should be to help people, machines and gadgets to do a better job.
8. Drive out fear so that everyone may work effectively for the company.
9. Break down barriers between departments.
10. Eliminate slogans.
11. Substitute leadership for work standards and management by objectives.
12. Remove barriers that rob the hourly worker and management of their right to pride of workmanship. The responsibility of supervisors must be changed from mere numbers to quality. End annual reviews or merit ratings and management by objectives.
13. Institute a rigorous program of education and self-improvement.
14. Put everybody in the company to work to accomplish the transformation; it is everybody’s job.

Some valuable quotations from Deming:

a) Knowledge is the key ingredient of quality.
b) Quality must come first. As quality increases, costs decrease.
c) By allowing and even urging workers to experience the intrinsic rewards that come from doing something well, and by using their innate and acquired abilities, productivity improves, quality improves, and customer satisfaction improves.

d) Running a company by profit alone is like driving a car by looking in the rearview mirror. It tells where you have been, not where you are going.

e) Inspection is too late. If workers can produce defect-free goods, eliminate inspectors.

f) Buy from suppliers committed to quality. Invest time and knowledge to help suppliers to improve quality and costs. Develop long-term relationships with suppliers.

g) The financial statements are not reality. They are financial descriptions of the past, a one-dimensional picture of a multidimensional world.

h) When top management blames every accident on the lax behaviour of workers, they are admitting their ignorance and abdicating their responsibilities.

i) It is a mistake to believe that one is improving quality when an inspector rejects a defective product or when a major quality flaw is found. That is just recognizing a defect produced by a system; it is not improving quality and it is not improving the system.

j) Cooperation is a fundamental ingredient that leads to improvement. In conventional thinking, competition is always preferred over cooperation.

k) The company that develops loyal customers has much higher earnings than the company that just pushes the product out of the door.

Deming’s cycle of continuous improvement

The PDCA Cycle was developed by Deming, and is also known as the Deming Wheel. The basic concept is that first you plan what you are going to do, then you do it, then you check the results. If the results are OK, you standardize your plan and put it into regular use. If the results are not satisfactory, you make changes to your plan, try it again, and if this time the results are OK you standardize the changed version of your plan and put it into regular use. The PDCA Cycle can be used for the simplest jobs or for the most complex company activities. It is an excellent system for a manager to use to continuously improve the level of quality in his or her department.
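The loop described above can be sketched as code. This is only an illustration of the control flow (plan, do, check, act); the callables are placeholders standing in for real process steps:

```python
def pdca(plan, do, check, act, max_cycles=10):
    """Run the Plan-Do-Check-Act cycle until results are satisfactory.

    do(plan)    carries the plan out,
    check(plan) returns True when the results are OK,
    act(plan)   revises the plan for the next cycle.
    """
    for _ in range(max_cycles):
        do(plan)                 # Do: put the plan into practice
        if check(plan):          # Check: are the results OK?
            return plan          # standardize the working plan
        plan = act(plan)         # Act: adjust the plan and try again
    return plan

# Toy usage: keep revising an integer "plan" until it passes the check.
final_plan = pdca(
    plan=1,
    do=lambda p: None,
    check=lambda p: p >= 3,
    act=lambda p: p + 1,
)
print(final_plan)
```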

Dr. J. M. Juran

After W. Edwards Deming, Dr. Joseph Juran is considered to have had the greatest impact on quality management. Juran originally worked in the quality program at Western Electric. He became better known in 1951, after the publication of his book Quality Control Handbook. In 1954 he went to Japan to work with manufacturers and teach classes on quality. Though his philosophy is similar to Deming’s, there are some differences. Whereas Deming stressed the need for an organizational “transformation,” Juran believes that implementing quality initiatives should not require such a dramatic change and that quality management should be embedded in the organization.

One of Juran’s significant contributions is his focus on the definition of quality and the cost of quality. Juran is credited with defining quality as fitness for use rather than simply conformance to specifications. Defining quality as fitness for use takes into account customer intentions for use of the product, instead of only focusing on technical specifications. Juran is also credited with developing the concept of cost of quality, which allows us to measure quality in financial terms rather than on the basis of subjective evaluations.

Juran is well known for originating the idea of the quality trilogy: quality planning, quality control, and quality improvement. The first part of the trilogy, quality planning, is necessary so that companies identify their customers, product requirements, and overriding business goals. Processes should be set up to ensure that the quality standards can be met. The second part of the trilogy, quality control, stresses the regular use of statistical control methods to ensure that quality standards are met and to identify variations from the standards. The third part of the quality trilogy is quality improvement. According to Juran, quality improvements should be continuous as well as breakthrough. Together with Deming, Juran stressed that to implement continuous improvement, workers need to have regular training in proper methods.

Other important issues that Juran highlighted are:

The notion that quality is not free, because of the law of diminishing returns: there is an optimum point of quality beyond which conformance is more costly than the quality obtained.

The role of purchasing in quality and in the control of suppliers, because suppliers are part of the quality chain. Effective supplier qualification and surveys are vital to ensure that the supplier can consistently manufacture to specifications.

Single sourcing can be counterproductive for an organization since a single source can more easily neglect to sharpen its competitive edge in quality, cost and service.

Armand V. Feigenbaum

Another quality leader is Armand V. Feigenbaum, who introduced the concept of total quality control. In his 1961 book Total Quality Control, he outlined his quality principles in 40 steps. Feigenbaum took a total system approach to quality. He promoted the idea of a work environment where quality developments are integrated throughout the entire organization, where management and employees have a total commitment to improve quality, and people learn from each other’s successes. This philosophy was adapted by the Japanese and termed “company-wide quality control.”

Philip B. Crosby

Philip B. Crosby is another recognized guru in the area of TQM. He worked in the area of quality for many years, first at Martin Marietta and then, in the 1970s, as the vice president for quality at ITT. He developed the phrase “Do it right the first time” and the notion of zero defects, arguing that no amount of defects should be considered acceptable. He scorned the idea that a small number of defects is a normal part of the operating process because systems and workers are imperfect. Instead, he stressed the idea of prevention.

To promote his concepts, Crosby wrote a book titled Quality Is Free, which was published in 1979. He became famous for coining the phrase “quality is free” and for pointing out the many costs of quality, which include not only the costs of wasted labor, equipment time, scrap, rework, and lost sales, but also organizational costs that are hard to quantify. Crosby stressed that efforts to improve quality more than pay for themselves because these costs are prevented. Therefore, quality is free.

Like Deming and Juran, Crosby stressed the role of management in the quality improvement effort and the use of statistical control tools in measuring and monitoring quality.

Kaoru Ishikawa

Kaoru Ishikawa is best known for the development of quality tools called cause-and-effect diagrams, also called fishbone or Ishikawa diagrams. These diagrams are used for quality problem solving. He was the first quality guru to emphasize the importance of the “internal customer,” the next person in the production process. He was also one of the first to stress the importance of total company quality control, rather than just focusing on products and services.

Dr. Ishikawa believed that everyone in the company needed to be united with a shared vision and a common goal. He stressed that quality initiatives should be pursued at every level of the organization and that all employees should be involved.

Dr. Ishikawa was a proponent of implementation of quality circles, which are small teams of employees that volunteer to solve quality problems.

Genichi Taguchi

Dr. Genichi Taguchi is a Japanese quality expert known for his work in the area of product design. He estimates that as much as 80 percent of all defective items are caused by poor product design. Taguchi stresses that companies should focus their quality efforts on the design stage, as it is much cheaper and easier to make changes during the product design stage than later during the production process.

Taguchi is known for applying a concept called design of experiments to product design. This method is an engineering approach that is based on developing a robust design, a design that results in products that can perform over a wide range of conditions. Taguchi’s philosophy is based on the idea that it is easier to design a product that can perform over a wide range of environmental conditions than it is to control the environmental conditions. Taguchi has also had a large impact on today’s view of the costs of quality. He pointed out that the traditional view of the costs of conformance to specifications is incorrect, and proposed a different way to look at these costs. Let us briefly look at Dr. Taguchi’s view of quality costs.

Recall that conformance to specifications specifies a target value for the product with specified tolerances. According to the traditional view of conformance to specifications, losses in terms of cost occur only if the product dimensions fall outside of the specified limits. However, Dr. Taguchi noted that from the customer’s view there is little difference whether a product falls just outside or just inside the specification limits. He pointed out that there is a much greater difference in the quality of the product between hitting the target and being near a specification limit. He also stated that the smaller the variation around the target, the better the quality. Based on this he proposed the following: as conformance values move away from the target, loss increases as a quadratic function.

This is called the Taguchi loss function. According to the function, smaller differences from the target result in smaller costs: the larger the differences, the larger the cost. The Taguchi loss function has had a significant impact in changing the view of quality cost.
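The loss function itself is a one-line formula, L(x) = k(x - T)^2, where T is the target value and k is a cost coefficient usually fixed from a known repair or scrap cost. A sketch in Python; the target and k values here are illustrative assumptions, not figures from the text:

```python
def taguchi_loss(x, target=10.0, k=50.0):
    """Taguchi's quadratic loss: cost grows with the squared deviation
    from the target, even for values inside the specification limits."""
    return k * (x - target) ** 2

# Loss is zero at the target and rises quadratically as we move away,
# unlike the traditional step-function view of conformance costs.
for x in (10.0, 10.25, 10.5):
    print(f"x = {x}: loss = {taguchi_loss(x):.2f}")
```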

THE EIGHT PRINCIPLES OF TQM

What characterizes TQM is the focus on identifying root causes of quality problems and correcting them at the source, as opposed to inspecting the product after it has been made. Not only does TQM encompass the entire organization, but it stresses that quality is customer driven. TQM attempts to embed quality in every aspect of the organization. It is concerned with the technical aspects of quality as well as the involvement of people in quality, such as customers, company employees, and suppliers.

Let us look at the specific concepts that make up the philosophy of TQM.

The eight principles are:

1. Customer-Focused Organization
2. Leadership
3. Involvement of People
4. Process Approach
5. System Approach to Management
6. Continual Improvement
7. Factual Approach to Decision-Making and
8. Mutually Beneficial Supplier Relationships.

Now let us examine the principles in detail.

Principle 1 - Customer-Focused Organization

“Organizations depend on their customers and therefore should understand current and future customer needs, meet customer requirements and strive to exceed customer expectations.”

Steps in application of this principle are:

1. Understand customer needs and expectations for products, delivery, price, dependability, etc.
2. Ensure a balanced approach among the needs and expectations of customers and other stakeholders (owners, people, suppliers, local communities and society at large).
3. Communicate these needs and expectations throughout the organization.
4. Measure customer satisfaction and act on the results, and
5. Manage customer relationships.

Here are two examples of explicit guidelines used to focus on the customers.

The first example is that of HP. They recommend that each worker, employee or department raise the following questions:

1. Who are my customers?
2. What are their needs?
3. What is my product or service?
4. What are my customers' measures or expectations?
5. What is my process for meeting their needs?
6. Does my product or service meet these needs?
7. What actions are needed to improve my process?

The second example is from Motorola.

1. Identify the work you do.
2. Identify whom you do it for.
3. What do you need to do your work, and from whom?
4. Map the process.
5. Mistake-proof the process and eliminate delays.
6. Establish quality and cycle time (flow time) measurements and improvement goals.

Principle 2 - Leadership

“Leaders establish unity of purpose and direction of the organization. They should create and maintain the internal environment in which people can become fully involved in achieving the organization’s objectives.”

Steps in application of this principle are:

1. Be proactive and lead by example.
2. Understand and respond to changes in the external environment.
3. Consider the needs of all stakeholders, including customers, owners, people, suppliers, local communities and society at large.
4. Establish a clear vision of the organization’s future.
5. Establish shared values and ethical role models at all levels of the organization.
6. Build trust and eliminate fear.
7. Provide people with the required resources and freedom to act with responsibility and accountability.
8. Inspire, encourage and recognize people’s contributions.
9. Promote open and honest communication.
10. Educate, train and coach people.
11. Set challenging goals and targets, and
12. Implement a strategy to achieve these goals and targets.

Principle 3 - Involvement of People

“People at all levels are the essence of an organization and their full involvement enables their abilities to be used for the organization’s benefit.”

Steps in application of this principle are:

1. Accept ownership and responsibility to solve problems.
2. Actively seek opportunities to make improvements and to enhance competencies, knowledge and experience.
3. Freely share knowledge and experience in teams.
4. Focus on the creation of value for customers.
5. Be innovative in furthering the organization's objectives.
6. Improve the way the organization is represented to customers, local communities and society at large.
7. Help people derive satisfaction from their work, and
8. Make people enthusiastic and proud to be part of the organization.

Principle 4 - Process Approach

“A desired result is achieved more efficiently when related resources and activities are managed as a process.”

Steps in application of this principle are:

1. Define the process to achieve the desired result.
2. Identify and measure the inputs and outputs of the process.
3. Identify the interfaces of the process with the functions of the organization.
4. Evaluate possible risks, consequences and impacts of processes on customers, suppliers and other stakeholders of the process.
5. Establish clear responsibility, authority and accountability for managing the process.
6. Identify internal and external customers, suppliers and other stakeholders of the process, and
7. When designing processes, consider process steps, activities, flows, control measures, training needs, equipment, methods, information, materials and other resources needed to achieve the desired result.

Principle 5 - System Approach to Management

“Identifying, understanding and managing a system of interrelated processes for a given objective improves the organization’s effectiveness and efficiency.”

Steps in application of this principle are:

1. Define the system by identifying or developing the processes that affect a given objective.
2. Structure the system to achieve the objective in the most efficient way.
3. Understand the interdependencies among the processes of the system.
4. Continually improve the system through measurement and evaluation, and
5. Estimate the resource requirements and establish resource constraints prior to action.

Principle 6 - Continual Improvement

“Continual improvement should be a permanent objective of the organization.”

Steps in application of this principle are:

1. Make continual improvement of products, processes and systems an objective for every individual in the organization.
2. Apply the basic improvement concepts of incremental improvement and breakthrough improvement.
3. Use periodic assessments against established criteria of excellence to identify areas for potential improvement.
4. Continually improve the efficiency and effectiveness of all processes.
5. Promote prevention-based activities.
6. Provide every member of the organization with appropriate education and training on the methods and tools of continual improvement, such as the Plan-Do-Check-Act cycle, problem solving, process re-engineering, and process innovation.
7. Establish measures and goals to guide and track improvements, and
8. Recognize improvements.

Principle 7 - Factual Approach to Decision Making

“Effective decisions are based on the analysis of data and information.”

Steps in application of this principle are:

1. Take measurements and collect data and information relevant to the objective.
2. Ensure that the data and information are sufficiently accurate, reliable and accessible.
3. Analyze the data and information using valid methods.
4. Understand the value of appropriate statistical techniques, and
5. Make decisions and take action based on the results of logical analysis, balanced with experience and intuition.

Principle 8 - Mutually Beneficial Supplier Relationships

“An organization and its suppliers are interdependent, and a mutually beneficial relationship enhances the ability of both to create value.”

Steps in application of this principle are:

1. Identify and select key suppliers.
2. Establish supplier relationships that balance short-term gains with long-term considerations for the organization and society at large.
3. Create clear and open communications.
4. Initiate joint development and improvement of products and processes.
5. Jointly establish a clear understanding of customers' needs.
6. Share information and future plans, and
7. Recognize supplier improvements and achievements.

USE OF QUALITY TOOLS

We can see that TQM places a great deal of responsibility on all workers. If employees are to identify and correct quality problems, they need proper training. They need to understand how to assess quality by using a variety of quality control tools, how to interpret findings, and how to correct problems. These are often called the seven tools of quality control. They are easy to understand, yet extremely useful in identifying and analyzing quality problems. Sometimes workers use only one tool at a time, but often a combination of tools is most helpful.

Cause-and-effect diagrams are charts that identify potential causes of particular quality problems. They are often called fishbone diagrams because they look like the bones of a fish. A general cause-and-effect diagram is shown in the figure below. The “head” of the fish is the quality problem, such as a crack in a forging or a broken pin. The diagram is drawn so that the “spine” of the fish connects the “head” to the possible causes of the problem. These causes could be related to the machines, workers, measurement, suppliers, materials, and many other aspects of the production process. Each of these possible causes can then have smaller “bones” that address specific issues related to each cause. For example, a problem with machines could be due to a need for adjustment, old equipment, or tooling problems. Similarly, a problem with workers could be related to lack of training, poor supervision, or fatigue.

Cause-and-effect diagrams are problem-solving tools commonly used by quality control teams. Specific causes of problems can be explored through brainstorming. The development of a cause-and-effect diagram requires the team to think through all the possible causes of poor quality.

A major disadvantage of the cause-and-effect diagram is that many causes of a quality problem can appear on a single branch.
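The structure of a fishbone diagram can be captured in code as a nested mapping from the quality problem to major cause categories and their specific causes. The problem and cause entries below are illustrative, not from any real study:

```python
# A cause-and-effect (fishbone) diagram as a nested mapping: the quality
# problem (the "head") maps cause categories (the main "bones") to lists
# of specific causes (the smaller "bones"). Entries are illustrative.
fishbone = {
    "problem": "Crack in forging",
    "causes": {
        "Machines": ["needs adjustment", "old equipment", "tooling problems"],
        "Workers": ["lack of training", "poor supervision", "fatigue"],
        "Materials": ["wrong alloy", "surface inclusions"],
        "Measurement": ["gauge out of calibration"],
    },
}

def print_fishbone(diagram):
    """Print the diagram as an indented outline, one branch per category."""
    print(f"Problem: {diagram['problem']}")
    for category, causes in diagram["causes"].items():
        print(f"  {category}:")
        for cause in causes:
            print(f"    - {cause}")

print_fishbone(fishbone)
```

A brainstorming team can extend such a structure branch by branch before drawing the final diagram.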

A flowchart is a schematic diagram of the sequence of steps involved in an operation or process. It provides a visual tool that is easy to use and understand. By seeing the steps involved in an operation or process, everyone develops a clear picture of how the operation works and where problems could arise. It can be used to improve effectiveness during a brainstorming session, while constructing a cause-and-effect diagram, and in any other situation where there is ambiguity about what the present state is.

A checklist is a list of common defects and the number of observed occurrences of these defects. It is a simple yet effective fact-finding tool that allows the worker to collect specific information regarding the defects observed. The checklist below shows four defects and the number of times they have been observed in a batch in a garment factory.

It is clear that the biggest problem is ripped material. This means that the plant needs to focus on this specific problem—for example, by going to the source of supply or seeing whether the material rips during a particular production process.

Sl. No.   Defect type       Tally      Total
1         Broken zipper     √√√        3
2         Faded colour      √√√√       4
3         Missing buttons   √√         2
4         Ripped material   √√√√√      5

A checklist can also be used to focus on other dimensions, such as location or time. For example, if a defect is being observed frequently, a checklist can be developed that measures the number of occurrences per shift, per machine, or per operator. In this fashion we can isolate the location of the particular defect and then focus on correcting the problem.
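The tally in such a checklist is a simple counting exercise. The sketch below reproduces the garment-factory counts above from a list of observed defects (the observation order is invented for illustration):

```python
from collections import Counter

# Each inspected garment's defect is recorded in order of observation;
# counting them reproduces the checklist totals in the table above.
observed = [
    "Ripped material", "Broken zipper", "Faded colour", "Ripped material",
    "Missing buttons", "Faded colour", "Ripped material", "Broken zipper",
    "Faded colour", "Ripped material", "Missing buttons", "Broken zipper",
    "Faded colour", "Ripped material",
]

checklist = Counter(observed)
for defect, count in checklist.most_common():
    print(f"{defect:16s} {count}")
```

Sorting by count immediately highlights ripped material as the biggest problem, which is the whole point of the tool.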

Control charts are a very important quality control tool. These charts are used to evaluate whether a process is operating within expectations relative to some measured value such as weight, width, or volume. For example, we could measure the weight of a sack of flour, the width of a tire, or the volume of a bottle of soft drink. When the production process is operating within expectations, we say that it is “in control.”

To evaluate whether or not a process is in control, we regularly measure the variable of interest and plot it on a control chart. The chart has a line down the center representing the average value of the variable we are measuring. Above and below the center line are two lines, called the upper control limit (UCL) and the lower control limit (LCL). As long as the observed values fall within the upper and lower control limits, the process is in control and there is no problem with quality. When a measured observation falls outside of these limits, there is a problem.
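The control-limit check can be sketched as follows. The sack weights are illustrative, and the limits are assumed to follow the common convention of three standard deviations above and below the process mean:

```python
import statistics

# Control-chart sketch: the centre line is the sample mean and the
# control limits are conventionally set at mean +/- 3 standard
# deviations. The flour-sack weights (kg) below are illustrative.
weights = [10.2, 10.0, 9.9, 10.1, 10.3, 9.8, 10.0, 10.2, 9.9, 10.1]

mean = statistics.mean(weights)
sigma = statistics.stdev(weights)
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

# Any observation outside the limits signals the process may be out of control.
out_of_control = [w for w in weights if w > ucl or w < lcl]
print(f"centre={mean:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
print("in control" if not out_of_control else f"out of control: {out_of_control}")
```

In practice the limits are fixed from historical in-control data and new observations are plotted against them as they arrive.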

Scatter diagrams are graphs that show how two variables are related to one another. They are particularly useful in detecting the amount of correlation, or the degree of linear relationship, between two variables. For example, increased production speed and number of defects could be correlated positively; as production speed increases, so does the number of defects. Two variables could also be correlated negatively, so that an increase in one of the variables is associated with a decrease in the other. For example, increased worker training might be associated with a decrease in the number of defects observed.

The greater the degree of correlation, the more linear are the observations in the scatter diagram. On the other hand, the more scattered the observations in the diagram, the less correlation exists between the variables. Of course, other types of relationships can also be observed on a scatter diagram, such as an inverted U. This may be the case when one is observing the relationship between two variables such as oven temperature and number of defects, since temperatures below and above the ideal could lead to defects.
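The degree of linear relationship a scatter diagram shows can be quantified with the Pearson correlation coefficient. The production-speed and defect figures below are illustrative:

```python
import math

# Pearson correlation of the two variables plotted on a scatter diagram.
# Positive r near +1 means defects rise roughly linearly with speed.
speed = [50, 60, 70, 80, 90, 100]   # units produced per hour (illustrative)
defects = [2, 3, 3, 5, 6, 8]        # defects observed per hour (illustrative)

n = len(speed)
mean_x = sum(speed) / n
mean_y = sum(defects) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(speed, defects))
var_x = sum((x - mean_x) ** 2 for x in speed)
var_y = sum((y - mean_y) ** 2 for y in defects)

r = cov / math.sqrt(var_x * var_y)
print(f"r = {r:.2f}")  # close to +1, a strong positive correlation
```

A negative r would correspond to the training example in the text, where more training is associated with fewer defects.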

Pareto analysis is a technique used to identify quality problems based on their degree of importance. The logic behind Pareto analysis is that only a few quality problems are important, whereas many others are not critical. The technique was named after Vilfredo Pareto, a nineteenth-century Italian economist who determined that only a small percentage of people controlled most of the wealth. This concept has often been called the 80–20 rule and has been extended to many areas. In quality management the logic behind Pareto’s principle is that most quality problems are a result of only a few causes. The trick is to identify these causes.

One way to use Pareto analysis is to develop a chart that ranks the causes of poor quality in decreasing order based on the percentage of defects each has caused. For example, a tally can be made of the number of defects that result from different causes, such as operator error, defective parts, or inaccurate machine calibrations. Percentages of defects can be computed from the tally and placed in a chart. We generally tend to find that a few causes account for most of the defects.

In brief, Pareto analysis should be used in various stages of quality improvement to determine which step to take next and to prioritise actions on the basis of frequency of defects.
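The tally-and-rank procedure can be sketched as follows; the causes and counts are made up for illustration:

```python
# Pareto analysis: rank causes by defect count, then compute each
# cause's share of defects and the cumulative share. Illustrative data.
tally = {
    "Operator error": 58,
    "Defective parts": 24,
    "Machine calibration": 12,
    "Wrong die": 4,
    "Other": 2,
}

total = sum(tally.values())
cumulative = 0.0
for cause, count in sorted(tally.items(), key=lambda kv: kv[1], reverse=True):
    pct = 100 * count / total
    cumulative += pct
    print(f"{cause:20s} {count:3d}  {pct:5.1f}%  cumulative {cumulative:5.1f}%")
```

Reading the cumulative column shows the 80-20 pattern directly: the top two causes here account for over 80 percent of all defects.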

A histogram is a chart that shows the frequency distribution of observed values of a variable. We can see from the plot what type of distribution a particular variable displays, such as whether it has a normal distribution and whether the distribution is symmetrical.

To construct a histogram:

1. Determine how many data values to use.
2. Determine the spread of the data by computing the range.
3. Select the number of cells for the histogram.
4. Determine the width of each cell.
5. Determine the starting number for the first interval.
6. Calculate the intervals.
7. Assign data values to the appropriate intervals.
8. Construct the histogram by drawing bars to represent the cell frequencies.
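The steps above can be sketched in code. The sample data and the square-root rule for choosing the number of cells are illustrative assumptions:

```python
import math

# Histogram construction following the listed steps.
data = [4.2, 4.5, 4.7, 4.3, 4.9, 5.1, 4.6, 4.4, 4.8, 5.0,
        4.5, 4.7, 4.6, 4.4, 4.8]                # step 1: the data values

data_range = max(data) - min(data)              # step 2: the range
cells = round(math.sqrt(len(data)))             # step 3: a common rule of thumb
width = data_range / cells                      # step 4: width of each cell
start = min(data)                               # step 5: starting number

counts = [0] * cells                            # steps 6-7: assign to intervals
for x in data:
    i = min(int((x - start) / width), cells - 1)
    counts[i] += 1

for i, c in enumerate(counts):                  # step 8: draw the bars
    lo = start + i * width
    print(f"{lo:4.2f}-{lo + width:4.2f} | {'#' * c}")
```

The text-mode bars are enough to see whether the distribution looks symmetrical or skewed.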

QUALITY FUNCTION DEPLOYMENT

"Time was when a man could order a pair of shoes directly from the cobbler. By measuring the foot himself and personally handling all aspects of manufacturing, the cobbler could assure the customer would be satisfied," lamented Dr. Yoji Akao, one of the founders of QFD, in his lectures.

Quality Function Deployment (QFD) was developed to bring this personal interface to modern manufacturing and business. In today's industrial society, where the growing distance between producers and users is a concern, QFD links the needs of the customer (end user) with design, development, engineering, manufacturing, and service functions.

QFD evolved among product development people in response to major problems in traditional development processes, such as:

a) Disregard of the voice of the customer
b) Disregard of the competition
c) Concentration on each specification in isolation
d) Low expectations
e) Little input from design and production people into product planning
f) Divergent interpretation of the specifications
g) Lack of structure
h) Lost information
i) Weak commitment to previous decisions

As a quality system that combines elements of Systems Thinking with elements of Psychology and Epistemology (the theory of knowledge), QFD provides a comprehensive development process for:

Understanding 'true' customer needs from the customer's perspective

What 'value' means to the customer, from the customer's perspective

Understanding how customers or end users become interested, choose, and are satisfied

Analyzing how we come to know the needs of the customer

Deciding what features to include

Determining what level of performance to deliver

Intelligently linking the needs of the customer with design, development, engineering, manufacturing, and service functions

Intelligently linking Design for Six Sigma (DFSS) with the front end Voice of Customer analysis and the entire design system

QFD is a comprehensive quality system that systematically links the needs of the customer with various business functions and organizational processes, such as marketing, design, quality, production, manufacturing, sales, etc., aligning the entire company toward achieving a common goal. It consists of translating customer desires (for example, the ease of writing for a pen) into design characteristics (pen ink viscosity, pressure on the ball-point) for each stage of the product development.

In short, the voice of the customer is translated into the voice of the engineer. 

QFD does so by seeking both spoken and unspoken needs, identifying positive quality and business opportunities, and translating these into actions and designs by using transparent analytic and prioritization methods, empowering organizations to exceed normal expectations and provide a level of unanticipated excitement that generates value.

The QFD methodology can be used for both tangible products and non-tangible services, including manufactured goods, service industry, software products, IT projects, business process development, government, healthcare, environmental initiatives, and many other applications.

Beginning with the initial matrix, commonly termed the house of quality, depicted in the figure below, the QFD methodology focuses on the most important product or service attributes and qualities.

Once you have prioritized the attributes and qualities, QFD deploys them to the appropriate organizational function for action, as shown in Figure 2. Thus, QFD is the deployment of customer-driven qualities to the responsible functions of an organization.

Many QFD practitioners claim that using QFD has enabled them to reduce their product and service development cycle times by as much as 75 percent with equally impressive improvements in measured customer satisfaction.

STEP-BY-STEP APPROACH TO QFD

QFD uses a series of matrices to document information collected and developed and represent the team's plan for a product. The QFD methodology is based on a systems engineering approach consisting of the following general steps:

1. Derive top-level product requirements or technical characteristics from customer needs (Product Planning Matrix).

2. Develop product concepts to satisfy these requirements.

3. Evaluate product concepts to select the optimum one (Concept Selection Matrix).

4. Partition the system concept or architecture into subsystems or assemblies and flow down higher-level requirements or technical characteristics to these subsystems or assemblies.

5. Derive lower-level product requirements (assembly or part characteristics) and specifications from subsystem/assembly requirements (Assembly/Part Deployment Matrix).

6. For critical assemblies or parts, flow-down lower-level product requirements (assembly or part characteristics) to process planning.

7. Determine manufacturing process steps to meet these assembly or part characteristics.

8. Based on these process steps, determine set-up requirements, process controls and quality controls to assure achievement of these critical assembly or part characteristics.

The matrices and the specific steps in the QFD process are as follows.

Gather Customer Needs

1. Plan collection of customer needs. What sources of information will be used? Consider customer requirement documents, requests for proposals, requests for quotations, contracts, customer specification documents, customer meetings/interviews, focus groups/clinics, user groups, surveys, observation, suggestions, and feedback from the field. Consider both current customers as well as potential customers. Pay particular attention to lead customers as they are a better indicator of future needs. Plan who will perform the data collection activities and when these activities can take place. Schedule activities such as meetings, focus groups, surveys, etc.

2. Prepare for collection of customer needs. Identify required information. Prepare agendas, list of questions, survey forms, focus group/user meeting presentations.

3. Determine customer needs or requirements using the mechanisms described in step 1. Document these needs. Consider recording any meetings. During customer meetings or focus groups, ask "why" to understand needs and determine root needs. Consider spoken needs and unspoken needs. Extract statements of needs from documents. Summarize surveys and other data. Use techniques such as ranking, rating, paired comparisons, or conjoint analysis to determine importance of customer needs. Gather customer needs from other sources such as customer requirement documents, requests for proposals, requests for quotations, contracts, customer specification documents, customer meetings/interviews, focus groups, product clinics, surveys, observation, suggestions, and feedback from the field.

4. Use affinity diagrams to organize customer needs. Consolidate similar needs and restate them. Organize needs into categories. Break down general customer needs into more specific needs by probing what is needed. Maintain a dictionary of original meanings to avoid misinterpretation. Use function analysis to identify key unspoken, but expected, needs.

5. Once needs are summarized, consider whether to get further customer feedback on priorities. Undertake meetings, surveys, focus groups, etc. to get customer priorities. State customer priorities using a 1 to 5 rating. Use ranking techniques and paired comparisons to develop priorities.

Product Planning

1. Organize customer needs in the Product Planning Matrix. Group under logical categories as determined with affinity diagramming.

2. Establish critical internal customer needs or management control requirements; industry, national or international standards; and regulatory requirements. If standards or regulatory requirements are commonly understood, they should not be included in order to minimize the information that needs to be addressed.

3. State customer priorities. Use a 1 to 5 rating. Critical internal customer needs or management control requirements; industry, national or international standards; and regulatory requirements, if important enough to include, are normally given a rating of "3".

4. Develop competitive evaluation of current company products and competitive products. Use surveys, customer meetings or focus groups/clinics to obtain feedback. Rate the company's and the competitor's products on a 1 to 5 scale with "5" indicating that the product fully satisfies the customer's needs. Include competitor's customer input to get a balanced perspective.

5. Review the competitive evaluation strengths and weaknesses relative to the customer priorities. Determine the improvement goals and the general strategy for responding to each customer need. The Improvement Factor is "1" if there are no planned improvements to the competitive evaluation level. Add a factor of 0.1 for every planned step of improvement in the competitive rating (e.g., a planned improvement from a rating of "2" to "4" would result in an improvement factor of "1.2"). Identify warranty, service, or reliability problems and customer complaints to help identify areas of improvement.
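The improvement-factor arithmetic described in this step is simple enough to express directly; the function name and example ratings are illustrative:

```python
# Improvement factor per the rule above: 1, plus 0.1 for every planned
# step of improvement in the competitive rating.
def improvement_factor(current_rating, planned_rating):
    """Return the improvement factor for moving between two 1-5 ratings."""
    return 1 + 0.1 * (planned_rating - current_rating)

print(improvement_factor(2, 4))  # planned move from "2" to "4"
print(improvement_factor(3, 3))  # no planned improvement
```

This factor later multiplies the customer priority when importance ratings are calculated in step 13.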

6. Identify the sales points that Marketing will emphasize in its message about the product. There should be no more than three major or primary sales points or two major sales points and two minor or secondary sales points in order to keep the Marketing message focused. Major sales points are assigned a weighting factor of 1.3 and minor sales points are assigned a weighting factor of 1.1.

7. The process of setting improvement goals and sales points implicitly develops a product strategy. Formally describe that strategy in a narrative form. What is to be emphasized with the new product? What are its competitive strengths? What will distinguish it in the marketplace? How will it be positioned relative to other products? In other words, describe the value proposition behind this product. The key is to focus development resources on those areas that will provide the greatest value to the customer. This strategy brief is typically one page and is used to gain initial focus within the team as well as communicate and gain concurrence from management.

8. Establish product requirements or technical characteristics to respond to customer needs and organize them into logical categories. Categories may be related to functional aspects of the products or may be grouped by the likely subsystems to primarily address that characteristic. Characteristics should be meaningful (actionable by Engineering), measurable, practical (can be determined without extensive data collection or testing) and global. By being global, characteristics should be stated in a way that avoids implying a particular technical solution so as not to constrain designers. This will allow a wide range of alternatives to be considered in an effort to better meet customer needs. Identify the direction of the objective for each characteristic (target value or range, maximize or minimize).

9. Develop relationships between customer needs and product requirements or technical characteristics. These relationships define the degree to which a product requirement or technical characteristic satisfies the customer need. They do NOT show a potential negative impact on meeting a customer need. Consider the goal associated with the characteristic in determining whether the characteristic satisfies the customer need. Use weights (we recommend using 5-3-1 weighting factors) to indicate the strength of the relationship: strong, medium and weak. Be sparing with the strong relationships in order to discriminate the really strong relationships.

10. Perform a technical evaluation of current products and competitive products. Sources of information include: competitor websites, industry publications, customer interviews, published specifications, catalogs and brochures, trade shows, purchasing and benchmarking competitors' products, patent information, articles and technical papers, published benchmarks, third-party service & support organizations, and former employees. Perform this evaluation based on the defined product requirements or technical characteristics. Obtain other relevant data such as warranty or service repair occurrences and costs.

11. Develop preliminary target values for product requirements or technical characteristics. Consider data gathered during the technical evaluation in setting target values. Do not get too aggressive with target values in areas that are not determined to be the primary area of focus with this development effort.

12. Determine potential positive and negative interactions between product requirements or technical characteristics using symbols for strong or medium, positive or negative relationships. Too many positive interactions suggest potential redundancy in product requirements or technical characteristics. Focus on negative interactions - consider product concepts or technology to overcome these potential trade-offs or consider the trade-offs in establishing target values.

13. Calculate importance ratings. Multiply the customer priority rating by the improvement factor, the sales point factor and the weighting factor associated with the relationship in each box of the matrix and add the resulting products in each column.
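Step 13's calculation can be sketched as follows. The priorities, factors and relationship weights form a made-up two-need, two-characteristic matrix (using the pen example from earlier in the text):

```python
# Importance rating per step 13: for each technical characteristic
# (column), sum over customer needs (rows) of
#   priority x improvement factor x sales-point factor x relationship weight.
# All figures below are illustrative.
needs = [
    # (priority 1-5, improvement factor, sales-point factor,
    #  {characteristic: relationship weight 5/3/1})
    (5, 1.2, 1.3, {"ink viscosity": 5, "ball pressure": 3}),  # "writes smoothly"
    (3, 1.0, 1.0, {"ink viscosity": 1, "ball pressure": 5}),  # "no blotting"
]

importance = {}
for priority, improve, sales, relations in needs:
    for characteristic, weight in relations.items():
        importance[characteristic] = importance.get(characteristic, 0) \
            + priority * improve * sales * weight

for characteristic, score in importance.items():
    print(f"{characteristic}: {score:.1f}")
```

The column totals rank the technical characteristics by how much customer-weighted value they carry, which guides where to focus design effort.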

14. Identify a difficulty rating (1 to 5 point scale, five being very difficult and risky) for each product requirement or technical characteristic. Consider technology maturity, personnel technical qualifications, resource availability, technical risk, manufacturing capability, supply chain capability, and schedule. Develop a composite rating or break the rating down into individual assessments by category.

15. Analyze the matrix and finalize the product plan. Determine required actions and areas of focus.

16. Finalize target values. Consider the product strategy objectives, importance of the various technical characteristics, the trade-offs that need to be made based on the interaction matrix, the technical difficulty ratings, and technology solutions and maturity.

17. Maintain the matrix as customer needs or conditions change.

Concept Development

1. Develop concept alternatives for the product. Consider not only the current approach and technology, but other alternative concept approaches and technology. Use brainstorming. Conduct literature, technology, and patent searches. Use product benchmarking to identify different product concepts. Develop derivative ideas. Perform sufficient definition and development of each concept to evaluate it against the decision criteria determined in the next step.

2. Evaluate the concept alternatives using the Concept Selection Matrix. List product requirements or technical characteristics from the Product Planning Matrix down the left side of the Concept Selection Matrix. Also add other requirements or decision criteria such as key unstated but expected customer needs or requirements, manufacturability requirements, environmental requirements, standards and regulatory requirements, maintainability / serviceability requirements, support requirements, testability requirements, test schedule and resources, technical risk, business risk, supply chain capability, development resources, development budget, and development schedule.

3. Carry forward the target values for the product requirements or technical characteristics from the Product Planning Matrix. Add target values as appropriate for the other evaluation criteria added in the previous step. Also bring forward the importance ratings and difficulty ratings associated with each product requirement or technical characteristic from the Product Planning Matrix. Normalize the importance ratings by dividing each by a factor chosen so that the largest value becomes "5", and post these values to the "Priority" column. Review these priorities and consider any changes appropriate, since these are the weighting factors for the decision criteria. Determine the priorities for the additional evaluation criteria added in the prior step. List concepts across the top of the matrix.

4. Perform engineering analysis and trade studies. Rate each concept alternative against the criteria using a "1" to "5" scale with "5" being the highest rating for satisfying the criteria.

5. For each rating, multiply the rating by the "Priority" value in that row. Sum these values in each column and enter the total in the bottom row. The preferred concept alternative(s) will be the one(s) with the highest total.
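Steps 4 and 5 amount to a weighted scoring calculation, which can be sketched as follows; the criteria, priorities and ratings are illustrative:

```python
# Concept Selection Matrix scoring: rate each concept 1-5 against each
# criterion, weight by the criterion's priority, and total the columns.
priorities = {"writes smoothly": 5, "low cost": 3, "durable": 4}

concepts = {
    "Concept A": {"writes smoothly": 4, "low cost": 5, "durable": 2},
    "Concept B": {"writes smoothly": 5, "low cost": 2, "durable": 4},
}

totals = {
    name: sum(priorities[criterion] * rating
              for criterion, rating in ratings.items())
    for name, ratings in concepts.items()
}

best = max(totals, key=totals.get)
print(totals)
print("preferred:", best)
```

Step 6 would then look at Concept B's lowest-rated criterion (low cost, rated "2") and ask whether Concept A's cost advantage can be grafted onto it.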

6. For the preferred concept alternative(s), work to improve the concept by synthesizing a new concept that overcomes its weaknesses. Focus attention on the criteria with the lowest ratings for that concept ("1's" and "2's"). What changes can be made to the design or formulation of the preferred concept(s) to improve these low ratings with the product concept? Compare the preferred concept(s) to the other concepts that have higher ratings for that particular requirement. Are there ways to modify the preferred concept to incorporate the advantage of another concept?

Subsystem/Subassembly/Part Deployment Matrix

1. Using the selected concept as a basis, develop a design layout, block diagram and/or a preliminary parts list. Determine critical subsystems, subassemblies or parts. Consider impact of subsystems, subassemblies or parts on product performance or with respect to development goals. What parts, assemblies or subsystems present major challenges or are critical to the success and operation of the product? What critical characteristics have a major effect on performance? Consider performing failure mode and effects analysis (FMEA); failure mode, effects and criticality analysis (FMECA); or fault tree analysis (FTA) to help pinpoint critical items and their critical characteristics from a reliability/quality perspective.

2. If there will be multiple Subsystem/Subassembly/Part Deployment Matrices prepared, deploy the technical characteristics and their target values to the appropriate matrices. Carry forward the important or critical product requirements or technical characteristics from the Product Planning Matrix (based on importance ratings and team decision) to the Subsystem/Subassembly/Part Deployment Matrix. These "product needs" become the "what's" for this next-level matrix. Where appropriate, allocate target values (e.g., target manufacturing cost, mean time between failures, etc.) to the Subsystem/Subassembly/Part Deployment Matrices. Organize these product requirements or technical characteristics by the assembly(ies) or part(s) to be addressed on a particular deployment matrix. Include any additional customer needs or requirements to address more detailed customer needs or general requirements. Normalize the Importance Ratings from the Product Planning Matrix and bring them forward as the Priority ratings. Review these priority ratings and make appropriate changes for the subsystems, subassemblies or parts being addressed. Determine the Priority for any needs that were added.

3. Considering product requirements or technical characteristics, identify the critical part, subassembly or subsystem characteristics. State the characteristics in a measurable way. For higher-level subsystems or subassemblies, state the characteristics in a global manner to avoid constraining concept selection at this next level.

4. Develop relationships between product needs (product-level technical characteristics) and the subsystem / subassembly / part technical characteristics. Use 5-3-1 relationship weights for strong, medium and weak relationships. Be sparing with the strong relationships.

5. Develop preliminary target values for subsystem / subassembly / part characteristics.

6. Determine potential positive and negative interactions between the technical part characteristics using symbols for strong or medium, positive or negative relationships. Too many positive interactions suggest potential redundancy in critical part characteristics. Focus on negative interactions - consider different subsystem / subassembly / part concepts, different technologies, tooling concepts, material technology, and process technology to overcome the potential trade-off or consider the trade-off in establishing target values.

7. Calculate importance ratings. Assign a weighting factor to the relationships (5-3-1). Multiply the customer importance rating by the improvement factor (if any), the sales point factor (if any) and the relationship factor in each cell of the relationship matrix and add the resulting products in each column.

8. Identify a difficulty rating (1 to 5 point scale, five being very difficult and risky) for each subsystem / subassembly / part requirement or technical characteristic. Consider technology maturity, personnel technical qualifications, business risk, manufacturing capability, supplier capability, and schedule. Develop a composite rating or breakdown into individual assessments by category. Determine if overall risk is acceptable and if individual risks based on target or specification values are acceptable. Adjust target or specification values accordingly.

9. Analyze the matrix and finalize the subsystem/subassembly/part deployment matrix. Determine required actions and areas of focus.

10. Finalize target values. Consider interactions, importance ratings and difficulty ratings.
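As a sketch, the importance-rating calculation described in step 7 can be expressed in a few lines of Python. All numbers below are illustrative, not taken from any real matrix; the 5-3-1 weights are the relationship weights named in step 4.

```python
# Sketch of the importance-rating calculation in step 7 (illustrative data).
# Relationship weights: strong = 5, medium = 3, weak = 1, none = 0.

customer_needs = [
    # (importance rating, improvement factor, sales point factor)
    (5, 1.2, 1.5),
    (3, 1.0, 1.0),
    (4, 1.1, 1.2),
]

# relationships[i][j]: weight linking need i to technical characteristic j
relationships = [
    [5, 0, 3],
    [0, 5, 1],
    [3, 3, 0],
]

def importance_ratings(needs, rel):
    """For each column (technical characteristic), multiply importance x
    improvement factor x sales point factor x relationship weight for every
    need (row), then sum the resulting products down the column."""
    n_cols = len(rel[0])
    ratings = []
    for j in range(n_cols):
        total = 0.0
        for (imp, improve, sales), row in zip(needs, rel):
            total += imp * improve * sales * row[j]
        ratings.append(round(total, 2))
    return ratings

print(importance_ratings(customer_needs, relationships))
```

The columns with the highest totals are the characteristics to carry forward to the next-level matrix.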

STATISTICAL QUALITY CONTROL TECHNIQUES

TQM focuses on customer-driven quality standards, managerial leadership, continuous improvement, quality built into product and process design, problems identified at the source, and quality made everyone’s responsibility. However, talking about solving quality problems is not enough. We need specific tools that can help us make the right quality decisions. These tools come from the area of statistics and are used to help identify quality problems in the production process as well as in the product itself.

Statistical quality control (SQC) is the term used to describe the set of statistical tools used by quality professionals. Statistical quality control can be divided into three broad categories:

1. Descriptive statistics are used to describe quality characteristics and relationships. Included are statistics such as the mean, standard deviation, the range, and a measure of the distribution of data.

2. Statistical process control (SPC) involves inspecting a random sample of the output from a process and deciding whether the process is producing products with characteristics that fall within a predetermined range. SPC answers the question of whether the process is functioning properly or not.

3. Acceptance sampling is the process of randomly inspecting a sample of goods and deciding whether to accept the entire lot based on the results. Acceptance sampling determines whether a batch of goods should be accepted or rejected.
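The descriptive statistics in category 1 can be computed directly; a minimal sketch, using made-up bottle-fill volumes rather than data from the text:

```python
import statistics

# Descriptive statistics for a small sample of fill volumes in ounces
# (illustrative data).
volumes = [16.02, 15.98, 16.05, 15.99, 16.01, 15.95]

mean = statistics.mean(volumes)
stdev = statistics.stdev(volumes)          # sample standard deviation
value_range = max(volumes) - min(volumes)  # simplest measure of spread

print(f"mean = {mean:.3f}, stdev = {stdev:.3f}, range = {value_range:.3f}")
```

The mean describes the central tendency, while the standard deviation and range describe the spread of the quality characteristic.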

All three of these statistical quality control categories are helpful in measuring and evaluating the quality of products or services. However, statistical process control (SPC) tools are used most frequently because they identify quality problems during the production process.

Process Management

According to TQM, a quality product comes from a quality process. This means that quality should be built into the process. Quality at the source is the belief that it is far better to uncover the source of quality problems and correct it than to discard defective items after production. If the source of the problem is not corrected, the problem will continue. For example, if you are baking cookies you might find that some of the cookies are burned. Simply throwing away the burned cookies will not correct the problem. You will continue to have burned cookies and will lose money when you throw them away. It will be far more effective to see where the problem is and correct it. For example, the temperature setting may be too high; the pan may be curved, placing some cookies closer to the heating element; or the oven may not be distributing heat evenly.

Quality at the source exemplifies the difference between the old and new concepts of quality. The old concept focused on inspecting goods after they were produced or after a particular stage of production. If an inspection revealed defects, the defective products were either discarded or sent back for reworking. All this cost the company money, and these costs were passed on to the customer. The new concept of quality focuses on identifying quality problems at the source and correcting them.

One way to ensure a quality product is to build quality into the process. Consider Steinway & Sons, the premier maker of pianos used in concert halls all over the world. Steinway has been making pianos since the 1880s. Since that time the company’s manufacturing process has not changed significantly. It takes the company nine months to a year to produce a piano by fashioning some 12,000-hand crafted parts, carefully measuring and monitoring every part of the process. While many of Steinway’s competitors have moved to mass production, where pianos can be assembled in 20 days, Steinway has maintained a strategy of quality defined by skill and craftsmanship. Steinway’s production process is focused on meticulous process precision and extremely high product consistency. This has contributed to making its name synonymous with top quality.

A brief history of SPC

We must go back almost 90 years, to 1923, when Walter Shewhart discovered a way to distinguish between common and special causes of variation in a process. His method is now called statistical process control (SPC).

Common cause variation

Shewhart realised that we must learn to live with and accept the random common causes of variation that occur in manufacturing. Together, these common causes form a dispersion pattern that describes the outcome of the process.

Even more important, Shewhart understood the danger of reacting to, and acting upon, individual readings of these common causes (see fig. below).

Two common scenarios: individual sample readings do not show the whole process, which can lead us to draw wrong conclusions.

In figure below, we happen to measure a part (1) which lies fairly close to the upper tolerance limit (UTL). We adjust the process accordingly (2) and thereby, all unwittingly and with the best of intentions, move the whole process so that it occasionally produces defective parts (3).

In the reverse case we happen to measure a part that lies near the middle of the tolerance range. In the belief that this reading is representative of the whole process, we make no adjustments.

What we do not know is that the whole process is badly off-centre and that we are sometimes, between our spot-check samples, producing defective parts (see figure above).

Special cause variation

Only if the position or spread of the process outcome has changed do we have cause to react and make adjustments. These changes are called special causes (see figure below).

See the whole forest, not just one tree

Using statistical process control means that instead of checking individual parts and comparing them with the tolerance limits, we get a general overview of our whole process.

Statistical process control lets the company control quality where it is created – at source.  The process is controlled at the right time, for the right reasons and towards the right objectives. Everybody wins: the operator has better control of the process, there are fewer rejects, and customers are much more satisfied.

Here are two compelling reasons for using SPC:

Quality is best at the target value. All processes vary.

Here are some more reasons:

Better reputation. Your customer will notice the difference compared to your competitors who do not use SPC.

Better quality. You will get early warning signals before you start producing defective parts.

Lower reject costs. The frequency of rejects will almost invariably fall.

Clearer objectives. Every member of your workforce will read the process the same way.

Less stress. The work load will, as a rule, become lighter.

Skill enhancement. Production personnel will gain a whole new awareness of their work and their processes.

Cost savings. High process capability reduces the need for final inspection.

Fewer stoppages. Incipient machine faults are detected early, which facilitates condition-based maintenance.

The six factors

These are the factors that are generally regarded as causing variation in capability measurements:

Machine (e.g. degree of wear and choice of tooling);

Measurement (e.g. resolution and spread of measuring instrument);

Operator (e.g. how experienced and careful he/she is);

Material (e.g. variations in surface smoothness and hardness);

Environment (e.g. variations in temperature, humidity and voltage);

Method (e.g. type of machining operation).

Machine capability

Machine capability is measured in Cm and Cmk; it is a snapshot that shows how well a machine is performing right now in relation to the tolerance limits. The figure below shows some examples.

When measuring machine capability you must not adjust the machine settings, change tools, materials, operators or measurement methods, stop the machine, etc. In other words: out of the six factors, only machine and measurement are allowed to affect the result.

Cm (capability machine)

The Cm index describes machine capability; it is the number of times the spread of the machine fits into the tolerance width. The higher the value of Cm, the better the machine.

Example: if Cm = 2.5, the spread fits 2½ times into the tolerance width, while Cm = 1 means that the spread is equal to the tolerance width.

Note that even if the spread is off-centre, it is still the same size (Cm index). The figure takes no account of where the spread is positioned in relation to the upper and lower tolerance limits, but simply expresses the relationship between the width of the spread and the tolerance width (see figure below).

Cmk (capability machine index)

If you also want to study the position of the machine’s capability in relation to the tolerance limits, you use the Cmk index, which describes the capability corrected for position. It is not much use having a high Cm index if the machine setting is way off centre in relation to the middle of the tolerance range.

A high Cmk index means, then, that you have a good machine with a small spread in relation to the tolerance width, and also that it is well centred within that width. If Cmk is equal to Cm, the machine is set to produce exactly in the middle of the tolerance range (see figure below).

A normal requirement is that Cmk should be at least 1.67.
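The Cm and Cmk descriptions above correspond to the usual capability formulas, where the machine spread is taken as six standard deviations. A minimal sketch with illustrative figures (the tolerance values and sigma below are assumptions for the example, not from the text):

```python
def cm(utl, ltl, sigma):
    """Cm: how many times the machine spread (6 sigma) fits into
    the tolerance width."""
    return (utl - ltl) / (6 * sigma)

def cmk(utl, ltl, mean, sigma):
    """Cmk: like Cm, but corrected for how well the machine is
    centred between the tolerance limits."""
    return min(utl - mean, mean - ltl) / (3 * sigma)

# Illustrative figures: tolerance 10.0 +/- 0.3, machine mean 10.1, sigma 0.05
print(cm(10.3, 9.7, 0.05))          # spread fits about 2 times into the width
print(cmk(10.3, 9.7, 10.1, 0.05))   # lower, because the machine is off-centre
```

Note that when the mean sits exactly in the middle of the tolerance range, Cmk equals Cm, matching the statement above.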

Process capability

Process capability is a long-term study, measured in Cp and Cpk, that shows how well a process is performing in relation to the tolerance limits while the study is in progress, as well as indicating likely performance in the immediate future.

You could say that process capability is the aggregate of a number of machine capability measurements taken over a period of time.

When measuring process capability, you must include everything that affects the process, i.e. all six factors.

Cp (capability process)

The Cp index describes process capability; it is the number of times the spread of the process fits into the tolerance width. The higher the value of Cp, the better the process.

Example: if Cp = 2.5, the spread of the process fits 2½ times into the tolerance width, while Cp = 1 means that the spread is equal to the tolerance width.

Note that even if the spread is off-centre, it is still the same size (Cp index). The figure takes no account of where the spread is positioned in relation to the upper and lower tolerance limits, but simply expresses the relationship between the width of the spread and the tolerance width (see figure below).

Cpk (capability process index)

If you also want to study the position of the process in relation to the tolerance limits, you use the Cpk index, which describes the process capability corrected for position. It is not much use having a high Cp index if the process setting is way off centre in relation to the middle of the tolerance range.

A high Cpk index means, then, that you have a good process with a small spread in relation to the tolerance width, and also that it is well centred within that width. If Cpk is equal to Cp, the process is set to produce exactly in the middle of the tolerance range (see figure below).

A normal requirement is that Cpk should be at least 1.33.
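Cp and Cpk use the same formulas as Cm and Cmk, but the mean and standard deviation are estimated from long-term data that includes all six factors. A minimal sketch with invented measurements:

```python
import statistics

def cp(utl, ltl, sigma):
    """Cp: how many times the process spread (6 sigma) fits into
    the tolerance width."""
    return (utl - ltl) / (6 * sigma)

def cpk(utl, ltl, mean, sigma):
    """Cpk: process capability corrected for how well the process is
    centred between the tolerance limits."""
    return min(utl - mean, mean - ltl) / (3 * sigma)

# Long-term measurements collected while the study is in progress
# (illustrative data); tolerance assumed to be 10.0 +/- 0.2.
data = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]
mu = statistics.mean(data)
sigma = statistics.stdev(data)

print(f"Cp  = {cp(10.2, 9.8, sigma):.2f}")
print(f"Cpk = {cpk(10.2, 9.8, mu, sigma):.2f}")
```

Because the estimated mean is slightly above the middle of the tolerance range, Cpk comes out a little below Cp.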

Control charts are an efficient way of analyzing performance data to evaluate a process. Control charts have many uses: in manufacturing they can test whether machinery is producing products within specifications, and they have simpler applications too, such as professors using them to evaluate test scores. A spreadsheet program such as Excel simplifies the calculations involved in creating a control chart.

The steps in preparing a control chart are as follows:

Check to see that your data meets the following criteria: data should usually be normally distributed around a mean (average).

In the example below, a bottle company fills their bottles to 16 oz. (mean); they are evaluating if their process is “in-control”. The amount in ounces over 16 oz. is normally distributed around the mean.

Measurements need to be independent of one another.

In the example, the measurements are in subgroups. The data in the subgroups should be independent of the measurement number; each data point will have a subgroup and a measurement number.

Find the mean of each subgroup To find the mean, add all measurements in the subgroup and divide by the number of measurements in the subgroup.

In the example, there are 20 subgroups and in each subgroup there are 4 measurements.

Find the mean of all means

This will give you the overall mean of all the data points.

The overall mean will be the centerline in the graph (CL), which is 13.75 for our example.

Calculate the standard deviation (S) of the data points.

Calculate the upper and lower control limits (UCL, LCL) using the following formula:

UCL = CL + 3*S

LCL = CL – 3*S

These formulas place the limits 3 standard deviations above and 3 standard deviations below the mean, respectively.
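The steps above can be sketched directly in Python. The data here is illustrative (5 subgroups of 4 measurements rather than the text's 20 subgroups), and S is computed from all the data points, following the text's instruction:

```python
import statistics

# Illustrative data: 5 subgroups of 4 measurements each.
subgroups = [
    [13.6, 13.8, 13.7, 13.9],
    [13.7, 13.9, 13.8, 13.6],
    [13.8, 13.7, 13.6, 13.8],
    [13.9, 13.6, 13.7, 13.7],
    [13.7, 13.8, 13.9, 13.8],
]

# Mean of each subgroup
subgroup_means = [statistics.mean(g) for g in subgroups]

# Mean of all means: the centerline (CL) of the chart
cl = statistics.mean(subgroup_means)

# Standard deviation (S) of all the data points
all_points = [x for g in subgroups for x in g]
s = statistics.stdev(all_points)

# Control limits: 3 standard deviations above and below the mean
ucl = cl + 3 * s
lcl = cl - 3 * s

print(f"CL = {cl:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```

Plotting the subgroup means against these three horizontal lines gives the X-bar chart described next.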

Refer to the below chart with steps given below.

Draw a line at each deviation.

In the above example, there is a line drawn at one, two, and three standard deviations (sigma’s) away from the mean.

Zone C is 1 sigma away from the mean (green).

Zone B is 2 sigma away from the mean (yellow).

Zone A is 3 sigma away from the mean (red).

Graph the X-bar control chart by plotting the subgroup number (x-axis) versus the subgroup means (y-axis). Your graph should look something like this:

Evaluate the graph to see if the process is out-of-control. The graph is out-of-control if any of the following are true:

Any point falls beyond the red zone (above or below the 3-sigma line).

8 consecutive points fall on one side of the centerline.

2 of 3 consecutive points fall within zone A.

4 of 5 consecutive points fall within zone A and/or zone B.

15 consecutive points are within zone C.

8 consecutive points are not in zone C.

(Guidelines are available in the AT&T standards.)
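Two of the out-of-control rules above can be sketched as simple checks over the subgroup means. This is an illustrative sketch of only the first two rules, with made-up values in the usage example:

```python
def beyond_three_sigma(means, cl, s):
    """Rule 1: any point falls beyond the 3-sigma limits
    (above UCL = cl + 3s or below LCL = cl - 3s)."""
    return any(x > cl + 3 * s or x < cl - 3 * s for x in means)

def run_of_eight(means, cl):
    """Rule 2: 8 consecutive points fall on one side of the centerline."""
    run, prev = 0, 0
    for x in means:
        side = (x > cl) - (x < cl)  # +1 above, -1 below, 0 on the line
        if side != 0 and side == prev:
            run += 1
        elif side != 0:
            run = 1
        else:
            run = 0
        prev = side
        if run >= 8:
            return True
    return False

# Illustrative checks with CL = 13.75 and S = 0.1
print(beyond_three_sigma([13.7, 14.2], cl=13.75, s=0.1))  # point above UCL
print(run_of_eight([13.8] * 8, cl=13.75))                 # 8 points above CL
```

The remaining zone rules can be implemented the same way by counting how many recent points fall inside each sigma band.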

Control limits

Control limits are an important aspect of statistical process control. They have nothing to do with tolerance limits; rather, they are designed to call your attention to the moment when the process changes its behaviour.

An important principle is that control limits are used along with the mean value on the control graph to control the process, unlike tolerance limits, which are used along with individual measurements to determine whether a given part meets specifications or not.

The function of control limits is to centre the process on the target value, which is usually the same as the middle of the tolerance width, and to show where the limit of a stable process lies. This means, in principle, that you have no reason to react until the control chart signals certain behaviour.

A commonly used control graph is the XR graph, where the position and spread of the process are monitored with the help of subgroups and control limits.

If a point falls outside a control limit on the X graph, the position of the process has changed (see figure below).

If a point falls outside a control limit on the R graph, the spread of the process has changed (see figure next page).
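Control limits for an XR graph are conventionally derived from the subgroup means and ranges using tabulated constants that depend on subgroup size. A minimal sketch, assuming subgroup size 4 and illustrative measurements (the A2, D3, D4 values come from the standard SPC constant tables):

```python
import statistics

# Standard control-chart constants for subgroup size n = 4.
A2, D3, D4 = 0.729, 0.0, 2.282

subgroups = [
    [13.6, 13.8, 13.7, 13.9],
    [13.7, 13.9, 13.8, 13.6],
    [13.8, 13.7, 13.6, 13.8],
]

xbars = [statistics.mean(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]

xbar_bar = statistics.mean(xbars)  # centre line of the X graph (position)
r_bar = statistics.mean(ranges)    # centre line of the R graph (spread)

# X graph limits: a point outside means the position has changed.
ucl_x = xbar_bar + A2 * r_bar
lcl_x = xbar_bar - A2 * r_bar

# R graph limits: a point outside means the spread has changed.
ucl_r = D4 * r_bar
lcl_r = D3 * r_bar

print(f"X graph: {lcl_x:.3f} .. {ucl_x:.3f}")
print(f"R graph: {lcl_r:.3f} .. {ucl_r:.3f}")
```

Because the limits are computed from the process data itself, they adapt to the process, which is exactly the principle discussed below.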

How are control limits determined?

The correct way is to let the control limits adapt to the process. That way, a smaller spread in the process gives a narrower control zone, while a greater spread gives a wider control zone (see figure below).

It is a widespread myth that this will cause the operator to adjust the process more often, but in practice the reverse is true; the process is adjusted less often compared to operation without SPC. If you let the control limits follow the process, you will react neither too early nor too late when the behaviour of the process changes.

Other ways of determining control limits

In some cases there may be difficulties about letting the control limits adapt to the process. One such case is where the process uses tools that are not easily adjustable, such as fixed reamers or punches.

Since such tools often produce very little variation in the process and therefore allow a narrow control zone without the possibility of adjusting the tool, it may be better to cut the control limits loose from the process and lock them to a given distance from the tolerance limits instead (see figure next page).

Acceptance sampling, the third branch of statistical quality control, refers to the process of randomly inspecting a certain number of items from a lot or batch in order to decide whether to accept or reject the entire batch. What makes acceptance sampling different from statistical process control is that acceptance sampling is performed either before or after the process, rather than during the process. Acceptance sampling before the process involves sampling materials received from a supplier, such as randomly inspecting crates of fruit that will be used in a restaurant or metal castings that will be used in a machine shop. Sampling after the process involves sampling finished items that are to be dispatched either to a customer or to a distribution center. Examples include randomly testing a certain number of computers from a batch to make sure they meet operational requirements, and randomly inspecting switches to make sure that they are not defective.

You may be wondering why we would only inspect some items in the lot and not the entire lot. Acceptance sampling is used when inspecting every item is not physically possible or would be overly expensive, or when inspecting a large number of items would lead to errors due to worker fatigue. This last concern is especially important when a large number of items are processed in a short period of time. Another example of when acceptance sampling would be used is in destructive testing, such as testing fuses or vehicles for crash testing. Obviously, in these cases it would not be helpful to test every item! However, 100 percent inspection does make sense if the cost of inspecting an item is less than the cost of passing on a defective item.

The goal of acceptance sampling is to determine the criteria for acceptance or rejection based on the size of the lot, the size of the sample, and the level of confidence we wish to attain. Acceptance sampling can be used for both attribute and variable measures, though it is most commonly used for attributes.

A control chart for variables is used to monitor characteristics that can be measured and have a continuum of values, such as height, weight, or volume. A soft drink bottling operation is an example of a variable measure, since the amount of liquid in the bottles is measured and can take on a number of different values.

A control chart for attributes, on the other hand, is used to monitor characteristics that have discrete values and can be counted. Often they can be evaluated with a simple yes or no decision. Examples: the apple is good or rotten; the lightbulb works or it does not.

A sampling plan is a plan for acceptance sampling that precisely specifies the parameters of the sampling process and the acceptance/rejection criteria. The variables to be specified include the size of the lot (N), the size of the sample inspected from the lot (n), the number of defects above which a lot is rejected (c), and the number of samples that will be taken.

There are different types of sampling plans. Some call for single sampling, in which a random sample is drawn from every lot. Each item in the sample is examined and is labeled as either “good” or “bad.” Depending on the number of defects or “bad” items found, the entire lot is either accepted or rejected. For example, a lot size of 50 cookies is evaluated for acceptance by randomly inspecting 10 cookies from the lot. The cookies may be inspected to make sure they are not broken or burned. If more than 4 of the 10 cookies inspected are bad, the entire lot is rejected. In this example, the lot size N = 50, the sample size n = 10, and the maximum number of defects at which a lot is accepted is c = 4. These parameters define the acceptance sampling plan.
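The single-sampling decision rule is simple enough to sketch directly; this illustrative version accepts a lot when the number of defective items in the sample does not exceed the acceptance number c:

```python
def single_sampling_decision(sample, c):
    """Accept the lot if the number of defective items in the sample
    does not exceed the acceptance number c; otherwise reject it."""
    defects = sum(1 for item in sample if item == "bad")
    return "accept" if defects <= c else "reject"

# The cookie example: n = 10 cookies inspected, acceptance number c = 4.
sample = ["good"] * 7 + ["bad"] * 3
print(single_sampling_decision(sample, c=4))  # 3 defects, within c = 4
```

Note that the whole lot's fate is decided from the sample alone, which is what makes the plan's discriminating power (discussed below) so important.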

Another type of acceptance sampling is called double sampling. This provides an opportunity to sample the lot a second time if the results of the first sample are inconclusive. In double sampling we first sample a lot of goods according to preset criteria for definite acceptance or rejection. However, if the results fall in the middle range, they are considered inconclusive and a second sample is taken. For example, a water treatment plant may sample the quality of the water ten times in random intervals throughout the day. Criteria may be set for acceptable or unacceptable water quality, such as .05 percent chlorine and .1 percent chlorine. However, a sample of water containing between .05 percent and .1 percent chlorine is inconclusive and calls for a second sample of water.

As we have seen, different sampling plans have different capabilities for discriminating between good and bad lots. At one extreme is 100 percent inspection, which has perfect discriminating power. However, as the size of the sample inspected decreases, so does the discriminating power, and the chance of accepting a defective lot increases. We can show the discriminating power of a sampling plan on a graph by means of an operating characteristic (OC) curve. This curve shows the probability or chance of accepting a lot given various proportions of defects in the lot.

The x axis shows the percentage of items that are defective in a lot. This is called “lot quality.” The y axis shows the probability or chance of accepting a lot. You can see that if we use 100 percent inspection we are certain of accepting only lots with zero defects. However, as the proportion of defects in the lot increases, our chance of accepting the lot decreases.

Regardless of which sampling plan we have selected, the plan is not perfect. That is, there is still a chance of accepting lots that are “bad” and rejecting “good” lots. The steeper the OC curve, the better our sampling plan is for discriminating between “good” and “bad.”

When 100 percent inspection is not possible, there is a certain amount of risk for consumers in accepting defective lots and a certain amount of risk for producers in rejecting good lots.

There is a small percentage of defects that consumers are willing to accept. This is called the acceptable quality level (AQL) and is generally in the order of 1–2 percent. However, sometimes the percentage of defects that passes through is higher than the AQL. Consumers will usually tolerate a few more defects, but at some point the number of defects reaches a threshold level beyond which consumers will not tolerate them. This threshold level is called the lot tolerance percent defective (LTPD). The LTPD is the upper limit of the percentage of defective items consumers are willing to tolerate.

Consumer’s risk is the chance or probability that a lot will be accepted that contains a greater number of defects than the LTPD limit. This is the probability of making a Type II error—that is, accepting a lot that is truly “bad.” Consumer’s risk or Type II error is generally denoted by beta (β). Producer’s risk is the chance or probability that a lot containing an acceptable quality level will be rejected. This is the probability of making a Type I error—that is, rejecting a lot that is “good.” It is generally denoted by alpha (α).

We can determine from an OC curve what the consumer’s and producer’s risks are. However, these values should not be left to chance. Rather, sampling plans are usually designed to meet specific levels of consumer’s and producer’s risk. For example, one common combination is to have a consumer’s risk (β) of 10 percent and a producer’s risk (α ) of 5 percent, though many other combinations are possible.
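The OC curve, and with it the producer's and consumer's risks, can be computed from the binomial distribution. A minimal sketch; the plan parameters (n = 50, c = 2) and the AQL/LTPD values are assumptions for the example, not from the text:

```python
from math import comb

def prob_accept(n, c, p):
    """Probability of accepting a lot: P(defects in sample <= c), using
    the binomial distribution with sample size n and lot fraction
    defective p. Evaluating this over a range of p traces the OC curve."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative plan: sample size n = 50, acceptance number c = 2.
n, c = 50, 2
aql, ltpd = 0.01, 0.10  # assumed quality levels for the example

producers_risk = 1 - prob_accept(n, c, aql)  # alpha: rejecting a good lot
consumers_risk = prob_accept(n, c, ltpd)     # beta: accepting a bad lot

print(f"alpha = {producers_risk:.3f}, beta = {consumers_risk:.3f}")
```

In practice this calculation is run in reverse: n and c are chosen so that alpha and beta land at the desired levels, such as the 5 percent / 10 percent combination mentioned above.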

HOW MUCH AND HOW OFTEN TO INSPECT

a) Based on product cost and product volume

As you know, 100 percent inspection is rarely possible. The question then becomes one of how often to inspect in order to minimize the chances of passing on defects and still keep inspection costs manageable. This decision should be related to the product cost and product volume of what is being produced. At one extreme are high-volume, low-cost items, such as nuts and bolts, for which 100 percent inspection would not be cost justified. Also, with such a large volume 100 percent inspection would not be possible because worker fatigue sets in and defects are often passed on. At the other extreme are low-volume, high-cost items, such as parts that will go into a space shuttle or be used in a medical procedure, that require 100 percent inspection.

Most items fall somewhere between the two extremes just described. For these items, frequency of inspection should be designed to consider the trade-off between the cost of inspection and the cost of passing on a defective item. Historically, inspections were set up to minimize these two costs. Today, it is believed that defects of any type should not be tolerated and that eliminating them helps reduce organizational costs. Still, the inspection process should be set up to consider issues of product cost and volume. For example, one company will probably have different frequencies of inspection for different products.

b) Based on process stability

Another issue to consider when deciding how much to inspect is the stability of the process. Stable processes that do not change frequently do not need to be inspected often. On the other hand, processes that are unstable and change often should be inspected frequently. For example, if it has been observed that a particular type of drilling machine in a machine shop often goes out of tolerance, that machine should be inspected frequently. Obviously, such decisions cannot be made without historical data on process stability.

c) Based on lot size

The size of the lot or batch being produced is another factor to consider in determining the amount of inspection. A company that produces a small number of large lots will have a smaller number of inspections than a company that produces a large number of small lots. The reason is that every lot should have some inspection, and when lots are large, there are fewer lots to inspect.

PLACE OF INSPECTION

Since we cannot inspect every aspect of a process all the time, another important decision is to decide where to inspect. Some areas are less critical than others. Following are some points that are typically considered most important for inspection.

Incoming materials

Materials that are coming into a facility from a supplier or distribution center should be inspected before they enter the production process. It is important to check the quality of materials before labor is added to them. For example, it would be wasteful for a seafood restaurant not to inspect the quality of incoming lobsters, only to discover later that its lobster bisque is bad. Another reason for checking inbound materials is to check the quality of sources of supply. Consistently poor quality in materials from a particular supplier indicates a problem that needs to be addressed.

Finished goods

Products that have been completed and are ready for dispatch to customers should also be inspected. This is the last point at which the product is in the production facility. The quality of the product represents the company’s overall quality. The final quality level is what will be experienced by the customer, and an inspection at this point is necessary to ensure high quality in such aspects as fitness for use, packaging, and presentation.

Before costly processes

During the production process it makes sense to check quality before performing a costly process on the product. If quality is poor at that point and the product will ultimately be discarded, adding a costly process will simply lead to waste. For example, in the production of leather armchairs in a furniture factory, chair frames should be inspected for cracks before the leather covering is added. Otherwise, if the frame is defective the cost of the leather upholstery and workmanship may be wasted.

USE OF SQC TOOLS FOR INSPECTION

In addition to where and how much to inspect, managers must decide which tools to use in the process of inspection. As we have seen, tools such as control charts are best used at various points in the production process. Acceptance sampling is best used for inbound and outbound materials. It is also the easiest method to use for attribute measures, whereas control charts are easier to use for variable measures. Surveys of industry practices show that most companies use control charts, especially x-bar and R-charts, because they require less data collection than p-charts.

SUPPLIER QUALITY

Performance means taking inputs (such as employee work, marketplace requirements, operating funds, raw materials and supplies) and effectively and efficiently converting them to outputs deemed valuable by customers.

It’s in your best interest to select and work with suppliers in ways that will provide for high quality.

Supplier performance is about more than just a low purchase price:

The costs of transactions, communication, problem resolution and switching suppliers all impact overall cost.

The reliability of supplier delivery, as well as the supplier’s internal policies such as inventory levels, all impact supply-chain performance.

It used to be common to line up multiple suppliers for the same raw material, over concern about running out of stock or a desire to play suppliers against one another for price reductions. But this has given way, in some industries, to working more closely with a smaller number of suppliers in longer-term, partnership-oriented arrangements.

Benefits of supplier partnerships include:

Partnership arrangements with fewer suppliers mean less variation in vital process inputs.

If your suppliers have proven to be effective at controlling their output, you don’t need to monitor the supplier and their product as closely.

Establishing an effective supplier management process requires:

Support from the top management of both companies involved.

Mutual trust. Spending more money now to develop the relationship, in order to prevent

problems later.

The manufacturing industry is in a special situation: Much of what manufacturers purchase is then incorporated into their products. This means there is a higher inherent risk, or potential impact, in the manufacturing customer-supplier relationship. For this reason, manufacturers often develop detailed supplier-management processes.

Moving towards TQM philosophy

As we have seen, total quality management has impacts on every aspect of the organization. Every person and every function is responsible for quality and is affected by poor quality. For example, recall that Motorola implemented its six-sigma concept not only in the production process but also in the accounting, finance, and administrative areas. Similarly, ISO 9000 standards do not apply only to the production process; they apply equally to all departments of the company. A company cannot achieve high quality if its accounting is inaccurate or the marketing department is not working closely with customers. TQM requires the close cooperation of different functions in order to be successful.

Marketing plays a critical role in the TQM process by providing key inputs that make TQM a success. Recall that the goal of TQM is to satisfy customer needs by producing the exact product that customers want. Marketing’s role is to understand the changing needs and wants of customers by working closely with them. This requires a solid identification of target markets and an understanding of whom the product is intended for. Sometimes apparently small differences in product features can result in large differences in customer appeal. Marketing needs to accurately pass customer information along to operations, and operations needs to include marketing in any planned product changes.

Finance is another major participant in the TQM process because of the great cost consequences of poor quality. General definitions of quality need to be translated into specific costs. This serves as a baseline for monitoring the financial impact of quality efforts and can be a great motivator. Recall the four costs of quality discussed earlier. The first two, prevention and appraisal, are preventive costs; they are intended to prevent internal and external failure costs. Not investing enough in prevention can result in failure costs, which can hurt the company. On the other hand, investing too much in prevention may not yield added benefits. Financial analysis of these costs is critical. You can see that finance plays a large role in evaluating and monitoring the financial impact of managing the quality process. This includes costs related to preventing and eliminating defects, training employees, reviewing new products, and all other quality efforts.
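The trade-off between preventive costs and failure costs can be made concrete with a simple cost-of-quality tally. The sketch below is purely illustrative: all figures are invented for demonstration, and a real analysis would draw on the company's own accounting data.

```python
# Illustrative cost-of-quality comparison. All rupee/dollar figures
# are invented assumptions, not data from any real company.

def total_cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Sum the four classic cost-of-quality categories."""
    return prevention + appraisal + internal_failure + external_failure

# Scenario A: little spent on prevention, so failure costs dominate.
scenario_a = total_cost_of_quality(prevention=10_000, appraisal=15_000,
                                   internal_failure=60_000, external_failure=90_000)

# Scenario B: more invested up front, so failures are largely prevented.
scenario_b = total_cost_of_quality(prevention=40_000, appraisal=25_000,
                                   internal_failure=15_000, external_failure=10_000)

print(scenario_a)  # 175000
print(scenario_b)  # 90000
```

Even with these made-up numbers, the comparison shows why finance tracks all four categories together: judging prevention spending in isolation would hide the failure costs it avoids.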

Accounting is important in the TQM process because of the need for exact costing. TQM efforts cannot be accurately monitored and their financial contribution assessed if the company does not have accurate costing methods.

Engineering Department efforts are critical in TQM because of the need to properly translate customer requirements into specific engineering terms. Recall the process we followed in developing quality function deployment (QFD). We depend on engineering to use general customer requirements in developing technical specifications, identifying specific parts and materials needed, and identifying equipment that should be used.

Purchasing is another important part of the TQM process. Whereas marketing is busy identifying what the customers want and engineering is busy translating that information into technical specifications, purchasing is responsible for acquiring the materials needed to make the product. Purchasing must locate sources of supply, ensure that the parts and materials needed are of sufficiently high quality, and negotiate a purchase price that meets the company’s budget as identified by finance.

Human Resources plays a crucial role in TQM. In fact, TQM is all about people. If we manage our people with respect, if we value them and treat them with dignity, they can help us to achieve the impossible. HR is critical to the effort to hire employees with the skills necessary to work in a TQM environment. That environment includes a high degree of teamwork, cooperation, dedication, and customer commitment. Human Resources is also faced with challenges relating to reward and incentive systems. Rewards and incentives are different in TQM from those found in traditional environments, which focus on rewarding individuals rather than teams.

Information Technology is highly important in TQM because of the increased need for information accessible to teams throughout the organization. IT should work closely with a company’s TQM development program in order to understand exactly the type of information system best suited for the firm, including the form of the data, the summary statistics available, and the frequency of updating. Tools such as Excel and Statit make SQC methods easy to apply.
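As a rough sketch of the kind of SQC calculation such tools automate, the snippet below computes simplified 3-sigma control limits for a set of subgroup means (a basic X-bar chart idea). The measurement data is invented, and a full X-bar chart would normally use range-based constants rather than the plain standard deviation used here.

```python
import statistics

# Hypothetical subgroup means from a process (invented data).
subgroup_means = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]

grand_mean = statistics.mean(subgroup_means)
sigma = statistics.stdev(subgroup_means)

# Simplified 3-sigma control limits around the grand mean.
ucl = grand_mean + 3 * sigma  # upper control limit
lcl = grand_mean - 3 * sigma  # lower control limit

# Flag any points falling outside the limits.
out_of_control = [x for x in subgroup_means if x > ucl or x < lcl]

print(round(grand_mean, 3), round(ucl, 3), round(lcl, 3))
print(out_of_control)  # [] — all points within limits
```

The point for TQM teams is not the arithmetic itself but that an information system can run this check continuously and alert the shop floor the moment a point drifts outside the limits.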

It must be remembered that TQM is oriented towards maintaining the competitive edge of the company. It is not a short-term program, a cost-cutting exercise or a public relations gimmick. TQM is the umbrella which covers all improvement activities such as SPC, Quality Circles, customer care, Just-In-Time management, Taguchi methods, etc. TQM drives must be tailor-made to suit the needs of a particular organization. It is not a package that can be taken from the shelf of one company and dropped neatly into another with the same impact.

Use the following RADAR technique:

R: Are the ideas relevant to my company?

A: How would I apply each of them?

D: What difficulties might I meet, and how would I overcome them?

A: Are there any additional actions that I might take?

R: What resources would be needed, what would they cost, and how could they be acquired?

Toyota maintain that on average they receive 100-140 ideas per employee each year, of which 97% are actioned.

Just Do It

There is a story about Dr. Deming being harassed over breakfast by a business journalist from America who wanted to know what was needed in the West to compete with and outdo Japan in total quality. Apparently Dr. Deming looked up from eating, stared the journalist coldly in the eye and told him, “Just do it, that’s all, just do it.”

It is as simple as it is true. One pound of practice is worth tons of theory. As the adage goes, a journey of ten thousand miles starts with a single step. TQM is a long, tortuous process. It is not a panacea, but it is the closest you will get to one. It is an unswerving commitment to constant innovation and improvement using people and science.

Let us march together to achieve excellence and success through TQM.