Quality Course 2


Haery Sihombing @ IP


QUALITY PARADIGM: Values, Goals, Controls, Information, and Consciousness

We all have needs, requirements, wants, expectations and desires. Needs are essential for life, to maintain certain standards, or essential for products and services to fulfill the purpose for which they have been acquired. Requirements are what we request of others and may encompass our needs, but often we don't fully realize what we need until after we have made our request. For example, now that we own a mobile phone we later discover we need hands-free operation while driving and didn't think to ask at the time of purchase. Hence our requirements at the moment of sale may or may not express all our needs. Our requirements may include wants - what we would like to have but it's not practical. For example, we want a cheap computer but we need a top-of-the-range model for what we intend to do with it. Expectations are implied needs or requirements. They have not been requested because we take them for granted - we regard them to be understood within our particular society as the accepted norm. They may be things to which we are accustomed based on fashion, style, trends or previous experience. Hence one expects sales staff to be polite and courteous, electronic products to be safe and reliable, food to be fresh and uncontaminated, tap water to be potable, policemen to be honest and people or organizations to do what they promise to do. In particular we expect goods and services to comply with the relevant laws of the country of sale and expect the supplier to know which laws apply. Desires are our innermost feelings about ourselves and our surroundings, what we would like most.

Everyone nowadays is laying claim to quality. At Ford, quality is job one. GM is putting quality on the road. Chrysler makes the best-built American car, and Lee Iacocca can't figure out why two cars come off the same American assembly line, and people prefer the one with the foreign label. Quality has given vanquished economies mighty post-war powers. The American posture is uneasy, looking back on neighboring progress for a fix on quality.

In Japan, Japanese Quality Control is a thought revolution in management. It represents a new way of thinking about management. Dr. Kaoru Ishikawa, Japan's foremost authority in this field, defines quality control as, "To practice quality control is to develop, design, produce, and service a quality product which is most economical, most useful, and always satisfactory to the customer". Quality Control is an integral, yet often underrated, concept key to the successful day-to-day operation of any large or small manufacturing or factory-based company.

The narrow interpretation of quality control is producing a quality product. The Japanese expand this definition in a broader sense to include: quality of work, quality of service, quality of information, quality of process, quality of division, quality of people, including workers, engineers, managers, and executives, quality of system, quality of company, and quality of objectives.

1. THE QUALITY REVOLUTION

The idea of quality control began in the United States in the post-World War II era with the innovations of engineers such as Dr. W. E. Deming, Dr. J. M. Juran, and Dr. W. A. Shewhart. They developed the basic ideas of quality control and the statistical methods for evaluating quality. Many of the quality control principles taught in American institutions of higher learning today revolve around the basic principles developed by these individuals. These predecessors based their ideas mainly around improving the production processes in firms and did not expand these ideas to other functional departments in companies.

The Japanese, specifically Dr. Ishikawa, decided to expand on the American ideas and relate them to the operation of every department. He expanded on these ideas with the same goal as his American colleagues: to provide a quality product, maintain a high service level, and keep a good working relationship with customers. Dr. Ishikawa expanded the idea to develop new principles for quality control: always use quality control as the basis for decisions; integrate the control of cost, price, and profit; and control the quantity of stock, production, and sales, and the date of delivery.

2. QUALITY PERSPECTIVES

In supplying products or services there are three fundamental parameters which determine their saleability or usability. They are price, quality and delivery. Customers require products and services of a given quality to be delivered by or be available by a given time and to be of a price which reflects value for money. These are the requirements of customers. An organization will survive only if it creates and retains satisfied customers and this will only be achieved if it offers for sale products or services which respond to customer needs, expectations, requirements and desires. Whilst price is a function of cost, profit margin and market forces and delivery is a function of the organization’s efficiency and effectiveness, quality is determined by the extent to which a product or service successfully meets the expectations, needs and requirements of the user during usage (not just at the point of sale).

2.1 Quality Goals

To control, assure and improve quality you need to focus on certain goals. Let's call them the quality goals. There follow some key actions from which specific goals may be derived:

Establish your customer needs and expectations - not doing this will certainly lead to unsatisfied customers.

Design products and services with features that reflect customer needs and expectations

Build products and services so as to faithfully reproduce the design which meets the customer's needs and expectations

Verify before delivery that your products and services possess the features required to meet the customer's needs and expectations

Prevent supplying products and services that possess features that dissatisfy customers.

Discover and eliminate undesirable features in products and services even if they possess the requisite features

Find less expensive solutions to customer needs because products and services which satisfy these needs may be too expensive.

Make your operations more efficient and effective so as to reduce costs because products and services that satisfy customer needs may cost more to produce than the customer is prepared to pay.

Discover what will delight your customer and provide it. (Regardless of satisfying customer needs your competitor may have provided products with features that give greater satisfaction!)

Establish and maintain a management system that enables you to achieve these goals reliably, repeatedly and economically.


ISO 9001 addresses quality goals through the use of the term ‘quality objectives’ but goes no further. The purpose of a quality system is to enable you to achieve, sustain and improve quality economically. It is unlikely that you will be able to produce and sustain the required quality unless you organize yourselves to do so. Quality does not happen by chance - it has to be managed. No human endeavour has ever been successful without having been planned, organized and controlled in some way.

The quality system is a tool and, like any tool, can be a valuable asset (or be abused, neglected or misused!). Depending on your strategy, quality systems enable you to achieve all the quality goals. Quality systems have a similar purpose to financial control systems, information technology systems, inventory control systems and personnel management systems. They organize resources so as to achieve certain objectives by laying down rules and an infrastructure which, if followed and maintained, will yield the desired results. Whether it is the management of costs, inventory, personnel or quality, systems are needed to focus the thought and effort of people towards prescribed objectives. Quality systems focus on the quality of what the organization produces, the factors which will cause the organization to achieve its goals, the factors which might prevent it satisfying customers and the factors which might prevent it from being productive, innovative and profitable. Quality systems should therefore cause conforming product and prevent nonconforming product.

Quality systems can address one of the quality goals or all of them; they can be as small or as large as you want them to be. They can be project specific, or they can be limited to quality control, that is, maintaining standards rather than improving them. They can include Quality Improvement Programmes (QIPs) or encompass what is called Total Quality Management (TQM). This book, however, only addresses one type of quality system: that which is intended to meet ISO 9000, which currently focuses on the quality of the outgoing product alone.

2.2 Achieving Quality

There are two schools of thought on quality management. One views quality management as the management of success and the other the elimination of failure. They are both valid. Each approaches the subject from a different angle. In an ideal world if we could design products, services and processes that could not fail we would have achieved the ultimate goal. Failure means not only that products, services and processes would fail to fulfill their function but that their function was not what our customers desired. A gold plated mousetrap that does not fail is not a success if no one needs a gold plated mousetrap!

We have only to look at the introductory clauses of ISO 9001 to find that the aim of the requirements is to achieve customer satisfaction by prevention of nonconformities. Hence quality management is a means for planning, organizing and controlling the prevention of failure. All the tools and techniques that are used in quality management serve to improve our ability to succeed in our pursuit of excellence.

Quality does not appear by chance, or if it does it may not be repeated. One has to design quality into the products and services. It has often been said that one cannot inspect quality into a product. A product remains the same after inspection as it did before, so no amount of inspection will change the quality of the product. However, what inspection does is measure quality in a way that allows us to make decisions on whether to release a piece of work. Work that passes inspection should be quality work, but inspection unfortunately is not 100% reliable. Most inspection relies on the human judgment of the inspector, and human judgment can be affected by many factors, some of which are outside our control, such as the private life, health or mood of the inspector. We may fail to predict the effect that our decisions have on others. Sometimes we go to great lengths in preparing organizational changes and find to our surprise that we neglected something or underestimated the effect of something. So we need other means than inspection to deliver quality products. It is costly anyway to rely only on inspection to detect failures - we have to adopt practices that enable us to prevent failures from occurring. This is what quality management is all about.

Quality management is both a technical subject and a behavioral subject. It is not a bureaucratic administrative technique. The rise in popularity of ISO 9000 has created some unhelpful messages such as the 'document what you do' strategy. There has also been a perception in the service industries that ISO 9000 quality systems only deal with the procedural aspects of a service and not the professional aspects. For instance, in a medical practice, the ISO 9000 quality system is often used only for processing patients and not for the medical treatment. In legal practices, the quality system again has been focused only on the administrative aspects and not the legal issues. The argument for this is that there are professional bodies that deal with the professional side of the business. In other words, the quality system addresses the non-technical issues and the profession the technical issues. This is not quality management. The quality of the service depends upon both the technical and non-technical aspects of the service. A patient who is given the wrong advice will remain dissatisfied even if his or her papers are in order, even if he or she is given courteous attention and is informed of the decision promptly. To achieve quality one has to consider both the product and the service. A faulty product delivered on time, within budget and with a smile remains a faulty product.

Another often forgotten aspect of quality management is the behavior of people in an organization. Such behavior is formed by the core values to which that organization subscribes. The absence of core values that form positive behavior may not have an immediate effect because individuals will operate according to their own personal values. When these conflict with the organization's values, an individual could resent being forced to comply and may eventually adopt the values of the majority or leave to find a more suitable company to work for.

The management of quality involves many aspects of an organization. In essence quality management is concerned with the failure potential of processes, products and services, as stated previously. Organizations comprise many functions, all of which are essential for the organization to function efficiently and effectively. It follows therefore that if any function fails to perform, there will be a corresponding detrimental effect on the organization. Whether this failure has any effect on the products and services offered for sale depends on the time taken for the effect to be damaging. Some failures have an immediate effect where they contribute directly to the supply of products and services. Others have a long-term effect where their contribution is indirect, such as the behavioral aspects. People work best when management shows it cares about them. Neglect the people and you eventually impact product quality. A failure in a support function, such as office cleaning, may not affect anything initially, but if the office remains unclean for a prolonged period it will begin to have an effect on productivity.

If a Total Quality Management philosophy is to be adopted, every function in the organization, regardless of the magnitude of its effect on processes, products and services, is brought into the system. ISO 9000 only addresses those functions that contribute directly to the sale of products and services to customers. The difference is that ISO 9000 and other standards used in a regulatory manner are not concerned with an organization's efficiency or effectiveness in delivering profit. However, they are concerned indirectly with nurturing the values that determine the behavior of the people who make decisions that affect product or service quality.

2.3 Quality Management

One of the most important issues that businesses have focused on in the last 20-30 years has been quality. As markets have become much more competitive, quality has become widely regarded as a key ingredient for success in business. In this revision note, we introduce what is meant by quality by focusing on the key terms you will come up against.

What is quality? You will come across several terms that all seem to relate to the concept of quality. It can be quite confusing working out what the difference is between them. We've defined the key terms that you need to know below:


Quality

Quality is first and foremost about meeting the needs and expectations of customers. It is important to understand that quality is about more than a product simply "working properly".

Think about your needs and expectations as a customer when you buy a product or service. These may include performance, appearance, availability, delivery, reliability, maintainability, cost effectiveness and price.

Think of quality as representing all the features of a product or service that affect its ability to meet customer needs. If the product or service meets all those needs - then it passes the quality test. If it doesn't, then it is sub-standard.

Quality Management

Producing products of the required quality does not happen by accident. There has to be a production process which is properly managed. Ensuring satisfactory quality is a vital part of the production process.

Quality management is concerned with controlling activities with the aim of ensuring that products and services are fit for their purpose and meet the specifications. There are two main parts to quality management:

(1) Quality assurance

(2) Quality control

Quality Assurance

Quality assurance is about how a business can design the way a product or service is produced or delivered to minimize the chances that output will be sub-standard. The focus of quality assurance is, therefore, on the product design/development stage.

Why focus on these stages? The idea is that - if the processes and procedures used to produce a product or service are tightly controlled - then quality will be "built-in". This will make the production process much more reliable, so there will be less need to inspect production output (quality control).

Quality assurance involves developing close relationships with customers and suppliers. A business will want to make sure that the suppliers to its production process understand exactly what is required - and deliver!

Quality Control

Quality control is the traditional way of managing quality. A further revision note deals with this in more detail.

Quality control is concerned with checking and reviewing work that has been done. For example, this would include lots of inspection, testing and sampling.

Quality control is mainly about "detecting" defective output - rather than preventing it. Quality control can also be a very expensive process. Hence, in recent years, businesses have focused on quality management and quality assurance.


Total Quality Management

Total quality management (usually shortened to "TQM") is a modern form of quality management. In essence, it is about a kind of business philosophy which emphasizes the need for all parts of a business to continuously look for ways to improve quality. We cover this important concept in further revision notes.

2.4 Quality Control

Quality control is the more traditional way that businesses have used to manage quality. Quality control is concerned with checking and reviewing work that has been done. But is this the best way for a business to manage quality?

Under traditional quality control, inspection of products and services (checking to make sure that what's being produced is meeting the required standard) takes place during and at the end of the operations process.

There are three main points during the production process when inspection is performed:

1 When raw materials are received prior to entering production

2 Whilst products are going through the production process

3 When products are finished - inspection or testing takes place before products are dispatched to customers
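As a minimal illustration of where these checks sit in the flow, the sketch below wraps a production step with the three inspection points. It is a hypothetical example: the stage names, the process function and the acceptance rules are assumptions for illustration, not part of the course material.

```python
# Hypothetical sketch of the three inspection points as pass/fail screens
# around a production step. The stage names, process function and
# acceptance rules are illustrative assumptions, not course material.

def screen(items, accept):
    """Split items into those that pass and those that fail a check."""
    passed = [item for item in items if accept(item)]
    failed = [item for item in items if not accept(item)]
    return passed, failed

def run_production(raw_materials, process, incoming_ok, in_process_ok, final_ok):
    # 1. Incoming inspection: raw materials checked before entering production.
    accepted, rejected_incoming = screen(raw_materials, incoming_ok)

    # 2. In-process inspection: work checked while going through production.
    work_in_progress = [process(material) for material in accepted]
    work_in_progress, rejected_in_process = screen(work_in_progress, in_process_ok)

    # 3. Final inspection: finished products checked before dispatch.
    finished, rejected_final = screen(work_in_progress, final_ok)
    return finished, (rejected_incoming, rejected_in_process, rejected_final)
```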

The problem with this sort of inspection is that it doesn't work very well!

There are several problems with inspection under traditional quality control:

1 The inspection process does not add any "value". If there were any guarantees that no defective output would be produced, then there would be no need for an inspection process in the first place!

2 Inspection is costly, in terms of both tangible and intangible costs - for example, materials, labor, time, employee morale, customer goodwill and lost sales.

3 It is sometimes done too late in the production process. This often results in defective or non-acceptable goods actually being received by the customer

4 It is usually done by the wrong people - e.g. by a separate "quality control inspection team" rather than by the workers themselves

5 Inspection is often not compatible with more modern production techniques (e.g. "Just in Time Manufacturing") which do not allow time for much (if any) inspection.

6 Working capital is tied up in stocks which cannot be sold

7 There is often disagreement as to what constitutes a "quality product". For example, to meet quotas, inspectors may approve goods that don't meet 100% conformance, giving the message to workers that it doesn't matter if their work is a bit sloppy. Or one quality control inspector may follow different procedures from another, or use different measurements.

As a result of the above problems, many businesses have focused their efforts on improving quality by implementing quality management techniques - which emphasize the role of quality assurance. As Deming (a "quality guru") wrote:

"Inspection with the aim of finding the bad ones and throwing them out is too late, ineffective,

costly. Quality comes not from inspection but from improvement of the process."

The ISO definition states that quality control is the operational techniques and activities that are used to fulfill requirements for quality. This definition could imply that any activity, whether serving the improvement, control, management or assurance of quality, could be a quality control activity. What the definition fails to tell us is that controls regulate performance. They prevent change and, when applied to quality, regulate quality performance and prevent undesirable changes in the quality standards. Quality control is a process for maintaining standards and not for creating them. Standards are maintained through a process of selection, measurement and correction of work, so that only those products or services which emerge from the process meet the standards. In simple terms, quality control prevents undesirable changes being present in the quality of the product or service being supplied. The simplest form of quality control is illustrated in the Figure below. Quality control can be applied to particular products, to processes which produce the products or to the output of the whole organization by measuring the overall quality performance of the organization.

Quality control is often regarded as a post-event activity, i.e. a means of detecting whether quality has been achieved and taking action to correct any deficiencies. However, one can control results by installing sensors before, during or after the results are created. It all depends on where you install the sensor, what you measure and the consequences of failure. Some failures cannot be allowed to occur and so must be prevented from happening through rigorous planning and design. Other failures are not so critical but must be corrected immediately using automatic controls or foolproofing. Where the consequences are less severe or where other types of sensor are not practical or possible, human inspection and test can be used as a means of detecting failure. Where failure cannot be measured without observing trends over longer periods, one can use information controls. They do not stop immediate operations but may well be used to stop further operations when limits are exceeded. If you have no controls then quality products are produced by chance and not by design. The more controls you install the more certain you are of producing products of consistent quality, but there is a balance to be achieved. Beware of the law of diminishing returns.
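As a minimal sketch of quality control as selection, measurement and correction against a standard, the example below checks sample measurements against assumed specification limits and separates conforming work from work held for correction. The limits and sample values are invented for illustration.

```python
# Minimal sketch of quality control as selection, measurement and
# correction against a specification. The limits and sample values are
# invented for illustration.

LOWER_LIMIT = 9.8   # assumed lower specification limit (e.g. mm)
UPPER_LIMIT = 10.2  # assumed upper specification limit (e.g. mm)

def within_spec(measurement: float) -> bool:
    return LOWER_LIMIT <= measurement <= UPPER_LIMIT

def control(measurements):
    """Release conforming work; hold the rest for correction."""
    released = [m for m in measurements if within_spec(m)]
    nonconforming = [m for m in measurements if not within_spec(m)]
    return released, nonconforming

released, held = control([9.9, 10.0, 10.4, 10.1, 9.7])
print(f"released {len(released)}, held for correction {len(held)}")
```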

It is often deemed that quality assurance serves prevention and quality control detection, but a control installed to detect failure before it occurs serves prevention, such as reducing the tolerance band to well within the specification limits. So quality control can prevent failure. Assurance is the result of an examination whereas control produces the result. Quality Assurance does not change the product; Quality Control does.

Quality Control is also a term used as a name of a department. In most cases Quality Control Departments perform inspection and test activities, and the name derives from the authority that such departments have been given. They sort good products from bad products and authorize the release of the good products. It is also common to find that Quality Control Departments perform supplier control activities, which are called Supplier Quality Assurance or Vendor Control. In this respect they are authorized to release products from suppliers into the organization, either from the supplier's premises or on receipt in the organization.

Since to control anything requires the ability to effect change, the title Quality Control Department is a misuse of the term, since such departments do not in fact control quality. They do act as a regulator if given the authority to stop release of product, but this is control of supply and not of quality. Authority to change product usually remains in the hands of the producing departments. It is interesting to note that similar activities within a Design Department are not called quality control but Design Assurance or some similar term. Quality Control has for decades been a term applied primarily in the manufacturing areas of an organization, and hence it is difficult to change people's perceptions after so many years of the term's incorrect use.

In recent times the inspection and test activities have been transferred into the production departments of organizations, sometimes retaining the labels and sometimes reverting to the inspection and test labels.

Control of quality, or anything else for that matter, can be accomplished by the following steps:

Determine what parameter is to be controlled.

Establish its criticality and whether you need to control before, during or after results are produced.

Establish a specification for the parameter to be controlled which provides limits of acceptability and units of measure.

Produce plans for control which specify the means by which the characteristics will be achieved and variation detected and removed.

Organize resources to implement the plans for quality control.

Install a sensor at an appropriate point in the process to sense variance from specification.

Collect and transmit data to a place for analysis.

Verify the results and diagnose the cause of variance.

Propose remedies and decide on the action needed to restore the status quo.

Take the agreed action and check that the variance has been corrected.
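The steps above describe a feedback loop: pick a parameter, give it limits of acceptability, install a sensor, detect variance and act to restore the status quo. The sketch below is one hypothetical way to express that loop; the parameter name, limits, sensor readings and remedy are all assumptions for illustration.

```python
# Hedged sketch of the control cycle listed above, assuming a numeric
# parameter with simple limits of acceptability. The parameter name,
# limits, sensor readings and remedy are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ControlledParameter:
    name: str
    lower: float                                 # lower limit of acceptability
    upper: float                                 # upper limit of acceptability
    sensor: Callable[[], float]                  # installed at an appropriate point
    corrective_action: Callable[[float], None]   # action to restore the status quo

    def run_once(self) -> bool:
        value = self.sensor()                    # collect and transmit data
        in_spec = self.lower <= value <= self.upper
        if not in_spec:                          # verify results, diagnose variance
            self.corrective_action(value)        # take the agreed action
        return in_spec

# Example use with a fake sensor and a print-based "remedy".
readings = iter([20.1, 20.6, 19.9])
oven_temperature = ControlledParameter(
    name="oven temperature (deg C)",
    lower=19.5,
    upper=20.5,
    sensor=lambda: next(readings),
    corrective_action=lambda v: print(f"adjust setpoint, reading was {v}"),
)
for _ in range(3):
    oven_temperature.run_once()
```

In practice the sensor might be an automatic gauge, a human inspection or an information control fed from records, as the preceding section notes.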

2.5 Quality Assurance

The ISO definition states that quality assurance is all those planned and systematic actions necessary to provide adequate confidence that an entity will fulfill requirements for quality. Both customers and managers have a need for quality assurance as they cannot oversee operations for themselves. They need to place trust in the producing operations, thus avoiding constant intervention.

Customers and managers need:

Knowledge of what is to be supplied (This may be gained from the sales literature, contract or agreement)

Knowledge of how the product or service is intended to be supplied (This may be gained from the supplier’s proposal or offer)

Knowledge that the declared intentions will satisfy customer requirements if met (This may be gained from personal assessment or reliance on independent certifications)


Knowledge that the declared intentions are actually being followed (This may be gained by personal assessment or reliance on independent audits)

Knowledge that the products and services meet your requirements (This may be gained by personal assessment or reliance on independent audits)

You can gain an assurance of quality by testing the product/service against prescribed standards to establish its capability to meet them. However, this only gives confidence in the specific product or service purchased and not in its continuity or consistency during subsequent supply. Another way is to assess the organization which supplies the products/services against prescribed standards to establish its capability to produce products of a certain standard. This approach may provide assurance of continuity and consistency of supply.

Quality assurance activities do not control quality, they establish the extent to which quality will be, is being or has been controlled. This is borne out by ISO 8402:1994, where it is stated that quality control concerns the operational means to fulfill quality requirements and quality assurance aims at providing confidence in this fulfillment, both within the organization and externally to customers and authorities. All quality assurance activities are post-event activities and off-line, and serve to build confidence in results, in claims, in predictions etc. If a person tells you they will do a certain job for a certain price in a certain time, can you trust them or will they be late, overspent and under spec? The only way to find out is to gain confidence in their operations and that is what quality assurance activities are designed to do. Quite often, the means to provide the assurance need to be built into the process, such as creating records, documenting plans, specifications, reviews etc. Such documents and activities also serve to control quality as well as assure it (see also ISO 8402). ISO 9001 provides a means for obtaining an assurance of quality, if you are the customer, and a means for controlling quality, if you are the supplier.

Quality assurance is often perceived as the means to prevent problems but this is not consistent with the definition in ISO 8402. In one case the misconception arises due to people limiting their perception of quality control to control during the event and not appreciating that you can control an outcome before the event by installing mechanisms to prevent failure such as automation, mistake-proofing, failure prediction etc. Juran provides a very lucid analysis of control before, during and after the event in Managerial Breakthrough.

In another case, the misconception arises due to the label attached to the ISO 9000 series of standards. They are sometimes known as the Quality Assurance standards when in fact, as a family of standards, they are Quality System standards. The requirements within the standards do aim to prevent problems and hence the association with the term Quality Assurance. Only ISO 9001, ISO 9002 and ISO 9003 are strictly Quality Assurance standards. It is true that by installing a quality system, you will gain an assurance of quality, but assurance comes about through knowledge of what will be, is being or has been done, rather than by doing it. Assurance is not an action but a result. It results from obtaining reliable information that testifies to the accuracy or validity of some event or product. Labelling the prevention activities as Quality Assurance activities may have a negative effect, particularly if you have a Quality Assurance Department. It could send out signals that the aim of the Quality Assurance Department is to prevent things from happening! Such a label could unintentionally give the department a law enforcement role.


Quality Assurance Departments are often formed to provide both customer and management with confidence that quality will be, is being and has been achieved. However, another way of looking upon Quality Assurance departments is as Corporate Quality Control. Instead of measuring the quality of products they are measuring the quality of the business and by doing so are able to assure management and customers of the quality of products and services.

Assurance of quality can be gained by the following steps illustrated diagrammatically in the Figure below:

Acquire the documents which declare the organization’s plans for achieving quality.

Produce a plan which defines how an assurance of quality will be obtained i.e. a quality assurance plan.

Organize the resources to implement the plans for quality assurance.

Establish whether the organization’s proposed product or service possesses characteristics which will satisfy customer needs.

Assess operations, products and services of the organization and determine where and what the quality risks are.

Establish whether the organization’s plans make adequate provision for the control, elimination or reduction of the identified risks.

Determine the extent to which the organization’s plans are being implemented and risks contained.

Establish whether the product or service being supplied has the prescribed characteristics.
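One hedged way to picture these assurance steps is as a set of confidence questions, each answered by evidence gathered from documents, assessments or audits. The checklist items below paraphrase the list above; the evidence values are purely illustrative.

```python
# Illustrative sketch only: quality assurance pictured as evidence that
# each confidence question has been answered. The question texts
# paraphrase the steps above; the evidence values are assumptions.

assurance_checks = {
    "plans for achieving quality are documented": True,
    "proposed product characteristics will satisfy customer needs": True,
    "quality risks are identified and covered by the plans": False,
    "the plans are actually being implemented": True,
    "the supplied product has the prescribed characteristics": True,
}

def confidence_gaps(checks):
    """Return the items for which adequate confidence has not been shown."""
    return [item for item, evidence_ok in checks.items() if not evidence_ok]

print("confidence gaps:", confidence_gaps(assurance_checks) or "none")
```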

In judging the adequacy of provisions you will need to apply the relevant standards, legislation, codes of practice and other agreed measures for the type of operation, application and business. These activities are quality assurance activities and may be subdivided into design assurance, procurement assurance, manufacturing assurance etc. Auditing, planning, analysis, inspection and test are some of the techniques which may be used. ISO 9000 is a quality assurance standard, designed for use in assuring customers that suppliers have the capability of meeting their requirements.

2.6 Quality Improvement

The ISO definition of quality improvement states that it is the actions taken throughout the organization to increase the effectiveness of activities and processes to provide added benefits to both the organization and its customers. In simple terms, quality improvement is anything which causes a beneficial change in quality performance. There are two basic ways of bringing about improvement in quality performance. One is by better control and the other by raising standards. We don't have suitable words to define these two concepts. Doing better what you already do is improvement but so is doing something new. Juran uses the term control for maintaining standards and the term breakthrough for achieving new standards. Imai uses the term Improvement when change is gradual and Innovation when it is radical. Hammer uses the term Reengineering for the radical changes. All beneficial change results in improvement whether gradual or radical so we really need a word which means gradual change or incremental change. The Japanese have the word Kaizen but there is no English equivalent that I know of other than the word improvement.

Quality improvement (for better control) is a process for changing standards. It is not a process for maintaining or creating new standards. Standards are changed through a process of selection, analysis, corrective action on the standard or process, education and training. The standards which emerge from this process are an improvement on those used previously. A typical quality improvement might be to increase the achieved reliability of a range of products from 1 failure every 1000 hours to meet the specified target of 1 every 5000 hours. Another might be to reduce service call-out response time from an average of 38 hours to the maximum of 36 hours specified. Another might be simply to correct the weaknesses in the registered quality system so that it will pass re-assessment.

Quality improvement (raising standards or Innovation), is a process for creating new standards. It is not a process for maintaining or improving existing standards. Standards are created through a process which starts at a feasibility stage and progresses through research and development to result in a new standard proven for repeatable applications. Such standards result from innovations in technology, marketing and management. A typical quality improvement might be to redesign a range of products to increase the achieved reliability from 1 failure every 5000 hours to 1 failure every 10,000 hours. Another example might be to improve the efficiency of the service organization so as to reduce the guaranteed call-out time from the specified 36 hours to 24 hours. A further example might be to design and install a quality system which complies with ISO 9001.

The transition between where quality improvement stops and quality control begins is where the level has been set and the mechanisms are in place to keep quality on or above the set level. In simple terms if quality improvement reduces quality costs from 25% of turnover to 10% of turnover, the objective of quality control is to prevent the quality costs rising above 10% of turnover. This is illustrated below.
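A small worked illustration of that hand-over point, using assumed figures: once improvement has brought quality costs down to 10% of turnover, quality control's job is to keep them from creeping back above that level.

```python
# Worked illustration with assumed figures: improvement brings quality
# costs down to the 10% level; control then keeps them from rising above it.

turnover = 2_000_000      # assumed annual turnover
held_level = 0.10         # quality-cost ratio set after the improvement

def quality_costs_held(quality_costs: float) -> bool:
    """Quality control's job: keep quality costs at or below the held level."""
    return quality_costs / turnover <= held_level

print(quality_costs_held(180_000))  # True: 9% of turnover, within the held level
print(quality_costs_held(260_000))  # False: 13% of turnover, control action needed
```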

Improvement by better control is achieved through the corrective action mechanisms; improvement by raising standards requires a different process, one which results in new standards. Improving quality by raising standards can be accomplished by the following steps, illustrated diagrammatically in the figure below:

Determine the objective to be achieved, e.g. new markets, products or technologies, new levels of organizational efficiency or managerial effectiveness, new national standards or government legislation. These provide the reasons for needing change.

Determine the policies needed for improvement, i.e. the broad guidelines to enable management to cause or stimulate the improvement.

Conduct a feasibility study. This should discover whether accomplishment of the objective is feasible and propose several strategies or conceptual solutions for consideration. If feasible, approval to proceed should be secured.

Produce plans for the improvement which specify the means by which the objective will be achieved.

Organize the resources to implement the plan.

Carry out research, analysis and design to define a possible solution and credible alternatives.

Model and develop the best solution and carry out tests to prove it fulfils the objective.

Identify and overcome any resistance to the change in standards.

Implement the change, i.e. put new products into production and new services into operation.

Put in place the controls to hold the new level of performance.


This improvement process will require controls to keep improvement projects on course towards their objectives. The controls applied should be designed in the manner described previously.

The inherent characteristics of a product or service created to satisfy customer needs, expectations and requirements are quality characteristics: physical and functional characteristics such as weight, shape, speed, capacity, reliability, portability, taste etc. Price and delivery are assigned characteristics, not inherent characteristics, of a product or service, and both are transient features, whereas the impact of quality is sustained long after the attraction or the pain of price and delivery has subsided. Quality characteristics that are defined in a specification are quality requirements, and hence any technical specification for a product or service that is intended to reflect customer expectations, needs and requirements is a quality requirement. Most people simply regard such requirements as product or service requirements, and calling them 'quality' requirements introduces potential for misunderstanding.
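To make the distinction concrete, the sketch below separates inherent quality characteristics from assigned ones in a simple data model. The field names and units are illustrative assumptions, not a prescribed scheme.

```python
# Sketch of the distinction drawn above: inherent (quality) characteristics
# belong to the product itself, while price and delivery are assigned.
# Field names and units are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class QualityCharacteristics:        # inherent in the product or service
    weight_kg: float
    top_speed_kmh: float
    mtbf_hours: float                # reliability, e.g. mean time between failures
    portable: bool

@dataclass
class AssignedCharacteristics:       # assigned and transient, not inherent
    price: float
    delivery_days: int

@dataclass
class ProductOffer:
    quality: QualityCharacteristics
    assigned: AssignedCharacteristics
```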

3. IMPLEMENTATION OF QUALITY CONTROL

Dr. Ishikawa attempted to standardize the procedure of implementing quality control by setting guidelines for implementation:

1. Engage in quality control with the goal of manufacturing products with the quality which can satisfy the requirements of consumers. Just meeting the standards or specifications of national organizations is not the answer. These standards and specifications are not perfect and do not always satisfy customer requirements. Customers change their requirements often as the market and consumers dictate, and national organizations do not set standards that keep up with customer requirements.

2. Companies must emphasize consumer orientation. Companies aim to develop the best products and then think that they are doing everyone a favor by providing quality products. This approach should be changed to take into account customer requirements and preferences during the design, production, and sale of products. "The customer is always right" should factor into these decisions.


3. Interpret quality in the broad sense to take into account all departments of the company versus concentrating solely on the production department.

4. Maintain a competitive price regardless of how high the level of quality is. Despite the quality, customers will not purchase excessively expensive products. Companies cannot implement quality that ignores price, profit, and cost control. These companies should aim to produce a product with just quality, just price, and just amount.

3.1. Proceeding with Quality Control

Proceeding with quality control involves several steps first described by the American, Dr. Frederick W. Taylor. Taylor used the phrase "plan-do-see" to describe control. Dr. Ishikawa further developed this approach by rewording the method as plan-do-check-action.

This involves several steps:

1. Determine goals and targets
2. Determine methods of reaching goals
3. Engage in education and training
4. Implement work
5. Check the effects of implementation
6. Take appropriate action (59)

Those six steps follow the plan-do-check-action approach modified by Dr. Ishikawa. The first two steps involve the planning aspect, steps three and four follow the doing aspect, step five is the checking step, and step six is the action portion of the method. Each of these steps involves a great deal of work.
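One way to visualize the mapping described above is to group the six steps under the four phases of the plan-do-check-action cycle. The sketch below is a minimal illustration; only the grouping itself comes from the text, the rest is an illustrative choice.

```python
# Minimal illustration grouping Dr. Ishikawa's six steps under the
# plan-do-check-action cycle, following the grouping in the paragraph
# above; the dictionary form itself is just an illustrative choice.

PDCA = {
    "plan":   ["determine goals and targets",
               "determine methods of reaching goals"],
    "do":     ["engage in education and training",
               "implement work"],
    "check":  ["check the effects of implementation"],
    "action": ["take appropriate action"],
}

def next_phase(phase: str) -> str:
    """Return the phase that follows, wrapping around so the cycle repeats."""
    order = ["plan", "do", "check", "action"]
    return order[(order.index(phase) + 1) % len(order)]

print(next_phase("action"))  # -> "plan": the cycle starts again
```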

Unless top management determines standardized policies, the company will not reach its goals. These policies need mutual understanding and adherence by all subordinates at every level in the hierarchical tree. Direct subordinates need to fully understand these policies in order to implement and enforce them in their individual departments. The mutual understanding by all employees puts everyone on the same page. This allows the company to proceed towards accomplishing its goals. Companies should set goals based on their problems at a company level and set specific limits on the goals. This makes the goals more explicit to everyone striving to attain them. With that said, top management should take into account the feelings and opinions of their subordinates when setting these guidelines.

Determining the methods of reaching the goals set forth by top management involves the standardization of work. An idea can work for an individual, but if those around him do not adopt it, confusion will arise, defeating the purpose of standardized work. The managers should again take the subordinates' input into account for more effective implementation of the standardized procedures. This approach looks at potential problems, and the standardization solves and eliminates these problems before they ever surface.

Engaging in education and training eliminates many human errors before they occur. Good goals and planning are useless unless the workers understand what they need to do and exactly how to do it. This does not mean classroom work exclusively. This step involves everything from one-on-one meetings between management and subordinates to classroom work to hands-on training with equipment and machinery. This step also involves the personal nurturing of every employee. Building a mutual trust between management and subordinates is vital to a cooperative team effort.


If management follows the previous three steps, implementation should not pose any significant, unfixable problems. Problems will arise at every step in the process, but management should solve these problems immediately as they surface.

The last two steps involve observing any negative effects that implementation presents and taking appropriate action to correct them. These steps involve finding the root cause of any problem and then systematically solving it in a standardized manner. Solutions presented must not only solve the problem, but must also prevent its recurrence.

3.2. Total Quality Control

In short, Total Quality Control (TQC) is the management of control itself. First suggested by Feigenbaum, TQC referred to the job of management in maintaining and managing their subordinates and their quality control system. The Japanese approach to TQC differs from Dr. Feigenbaum's approach in that the Japanese involve this type of management at all levels of the company. The Japanese encourage all employees in every department to involve themselves in studying and promoting quality control. The Japanese designate their approach by the name company-wide quality control.

Companies all have different goals for TQC and reasons for implementing it; however, they all have similar purposes summarized by Dr. Ishikawa:

1. Improve the corporate health and character of the company
2. Combine the efforts of every employee, achieving participation by all, and establishing a cooperative system
3. Establish a quality assurance system and obtain the confidence of customers and consumers
4. Aspire to achieve the highest quality in the world and develop new products for that purpose alone
5. Show respect for humanity, nurture human resources, consider employee happiness, provide cheerful workplaces, and pass the torch to the next generation
6. Establish a management system that can secure profit in times of slow growth and can meet various challenges
7. Utilize quality control techniques (97)

4. THE JAPANESE ADVANTAGE

It is well known in industry on a global level that the Japanese are the industry leaders in TQC. There are several reasons for their advantage. It begins, however, with the differences between Japanese quality control and that of the West.

In 1967, the seventh Quality Control Symposium determined six distinguishing characteristics of Japanese quality control:

1. Company-wide quality control
2. Education and training in quality control
3. QC circle activities
4. QC audits
5. Utilization of statistical methods
6. Nationwide quality control promotion activities


There also exist several societal and work related differences in attitude and practice that provide the Japanese with a significant advantage over their Western counterparts.

The differences in professionalism provide the Japanese with an advantage. Professionalism and specialization are common and encouraged in Western cultures. QC specialists run the quality departments in companies and make the decisions regarding those issues. In Japan, companies rotate workers through all divisions to give them experience in many different departments of the company. This allows a broader understanding of quality control and what it takes to achieve TQC.

The vertical integration of Japanese companies provides management with a better working relationship with their subordinates. Subordinates are less apprehensive about approaching management with suggestions and ideas. On the other hand, most Western companies have distinct gaps between each level of employee, which allows a communication gap to persist. This poses many problems for Western companies when trying to get all team members on the same page aiming towards a common goal.

Labor unions influence many company decisions in the West. The unions organize themselves along functional lines. Therefore, if the welders or the machinists decide to strike, a whole company will shut down. Japanese labor unions organize themselves across an entire enterprise. These unions cross-train workers to be multi-functional. The Japanese companies nurture this type of worker, which provides greater cohesion in the team effort.

Western cultures rely too heavily on the Taylor method. This method is one of management by specialists. These specialists develop standards and specification requirements. In contrast, as previously mentioned, Japanese cross-train their workers to educate them in many facets of company operation. They also work more as a team to develop these standards and requirements.

In Japan, most workers stay with a single company for their whole working careers. This makes the workers familiar with the company and develops more of a family atmosphere. This enhances the cohesion between workers previously mentioned. In Western cultures, the job turnover rate is much higher. Workers change companies looking for pay raises and personal career development. This brings a "me" attitude to many companies, which can take away from company productivity. There are many other cultural differences between Japan and the West that also contribute to the gap that sets Japanese companies apart from their global competition.

5. QUALITY AS A VALUE

The ad men are on it daily; government agencies are asking for quality control manuals; some seek no less than total quality management; yes, the chances are you have some form of quality strategy working or planned.

Quality management is a wildly popular merit badge.


Whose Shoes?

Give popularity a look askance; it runs on a sidetrack. What is constructive about a thing is seldom known by its popularity. Take running shoes. They are good shoes and mostly trustworthy. The people who wore the first running shoes probably were runners, but the shoes got popular. Now, mostly, the people who wear them don't run in them. On the question of who runs, the wearing of running shoes is untrustworthy information. If everyone showed up tomorrow in running shoes, don't bet on the winner by the wardrobe.

You Run in Borrowed Shoes?

There isn't a quality control manual in here. You checked, didn't you? No person in the world can give you quality from a manual.

Some mighty impressive quality control manuals can be found, and they say things finer than the rest of us even know. Still, the people wrapped in them sometimes must stand naked for their appointments in reality.

The quality paradigm explained and applied here is not a manual for following. It is a focal point for reflecting on yourself in many mirrors. Quality comes from there.

We pay too dear a price for experience that we should ever be permitted to waste it. A number of the payments we are making to that account are considered here. They were drawn from the public record that we may each recognize our own time and place in them.

IS QUALITY CONTROLLED?

I'm not at ease with this "control" business when it comes to quality. Quality Control sounds entirely too self-congratulatory. Quality assurance and quality management unsettle me, as well. They are close-ended terms, and, when I listen to people speaking them, I hear a finality in their voices that says they have it under control. But quality is an elusive prize. I can't get easy with terms that hint we can directly control it. Think that, and you could lose the richness of the quest for quality in the same way "moon shot" loses the sustained exhilaration of a lunar rendezvous.

The seekers of rendezvous move to choreography with complex steps, breathtaking leaps, and music; they have harmony and are open to secret possibilities. Astronauts ventured to the moon in the embrace of that. And you will be closer to quality in a rendezvous and more likely to touch and be touched by it.

The quality paradigm sounds a chord of notes to help you find the steps in quality's choreography and hear the music it plays on the strings of your consciousness.

THE PARADIGM

The connection with quality is made by decoding the human behavior of conscious beings and preparation of the individual consciousness for quality results. The paradigm is rooted in the conviction that quality is the result of conscious judgments made to achieve it, and our failures are traceable to dysfunctions in our consciousness.

Within our consciousness, values are gatekeepers, opening for some choices, closing for others, and providing "just squeeze by" space for some. They are standards for choices.

Controls are the tools we design to manage and inform us of our progress toward settled goals. We understand and act on their messages in the context of all the information in our consciousness, which is open to possibilities broadly or narrowly depending on our construction and conservation of information and our individual selves.

THE VALUE STATEMENT

Management is widely charged with setting the goals for the firm, including the quality goals. We stand, but do not rest with that. Goals and values are uttered in the same breath with such nonchalance that I wonder: Do we practice conservation of values separately from goals? We should. Goals are never closer than derivatives of values, and in practice may not support professed values at all. We throw goals ahead of us to march in their direction, but why goals land where they do and why people march to those places and not others is distinguished by values.

Goals yield no common sense, and wisdom cannot be gained from their achievement or failure without knowledge and conservation of the value premises. Absent wisdom ordered by values, our marches make the picture of pachinko balls dropped into reality to score points or not in a noisy descent. The racket we make does not accumulate to our wisdom.

Value premises are distinct from goals by their formation. Goals are settlements. Both individual and organizational goals settle competition over the questions: What shall we do? What will we forego? And, what resources shall be committed?

People speak easily of established organizational values and goals. You should not glide easily over that. As settlements of the choices open to a group, organizations can be said to have common goals. But a common value premise does not arise by any such settlement. Certainly, a number of people can agree that they hold similar value premises, but that is close correlation of what value is held rather than settlement among values open for choice.

You could settle with me that to achieve the goal of a 10% productivity increase, our partnership will allocate 15% of its cash resources to computer-aided drafting equipment, and we will be bound by our goal settlement.

But could I settle with you that I am bound by your value premise: Business growth is good?

If we agreed already, that is a correlation. In fact, however, I would spend the increased productivity not on building our business, but on time to kayak the river every Friday. My value premise is: Confrontation with the object of man's desiring is good. If you insisted, and I agreed with your value premise as a condition to buying the equipment, I would conclude that, by making the value "settlement," the value premise nearest that is: Agreement with you is good. The river is still in the lead, and you would find me there every Friday.

Does that help answer why organizations do not have values per se and suggest where to look for values?

Value premises are important to the pursuit of quality, because what is steadily valued (if experienced as wisdom) is the rail that keeps an individual locked onto the prize. Otherwise, a professional does not pore as thoughtfully over the one hundred and first shop drawing at seven thirty tonight as over the first at eight o'clock this morning, or does not faithfully complete a job awash in budget overruns. Of course, an assumption of the value held was
made for those statements, and, if the rails are set on a side track, they run a different course to another destination.

How high or low you place the value note determines the tone of the paradigm chord. Some chords are played on the high register; others play lower.

The Settlement of Goals

All quality goals are settlements.

The settlement of goals for quality is a management task; however, it is not a solitary task. Professional service is advice for the guidance of others. Clients purchase advice, and, in the price paid, they largely settle the quality goals whether spoken or not.

I call it a settlement to draw specific attention to the fact that the level of quality is an issue between the design professional and client. It must be specifically settled, else the financial resources to deliver the quality expected will not be provided. The first chance, and perhaps the only chance to settle quality, is the negotiation of the contract for services.

Left unsettled, the client's expectations for quality will likely be quite high (or will be so stated when goals conflict), while the professional's quality goal will tend to center on the fee available to pay for it. In other cases, the prospective client's quality target may be so low that the legal minimum standard of care should preclude settlement.

If you aspire to design quality projects, seek quality in both your clients and yourself. Settle the goals specifically, and determine that they are matched. When they are mismatched, test what values they derive from and learn what tone the project will sound. You cannot play on the high register if the client is intent on other notes.

The Controls

Controls serve three primary purposes: first, a control communicates what management expects the firm's professionals will do to achieve a goal; second, a control provides information to management and the working professionals about progress toward the goal; third, a control raises the stock of people who obtain quality results, which is proof, manifested in people, of the value held for quality.

Controls, like the goals before them, are derived.

How the notes of the paradigm play a chord is important to staying in step with quality.

The value statement says what ought to be.

The goal is management's settlement of its aspirations to the value stated by its commitment of resources to it.

The control is information about how to achieve the goal and the progress made along the path to it.

If values are not firmly held, goals will be confused, inconsistent, and ill defined. Then, controls on those goals will be pointed in the wrong direction, or whatever information they
offer will go unheeded. Light will not cleanly separate the quality achievers from those in the shadows. You will miss the complex steps; your leaps will be into empty space; and your music will sound discordant notes. Finally, the value premises will be exposed by behavior tested in reality.

Quality strategies are typically chock-full of procedures, forms, standard design details, and checklists. Particular quality management procedures are examples of controls: a design peer review, a mock-up and test of a structural connection, the common checklist, a quality performance evaluation, and the budget to do any of them. Typically, much stock is put in them because they are demonstrations of headway toward quality, but listing them here is not a recommendation for them. I have a different point.

Any strategy for quality is destined to fail utterly unless controls give the opportunity and the time to act. The initiation of effective controls requires the foresight waiting for you a few pages ahead in "The Freedom to Act" and beyond. Controls are tools for anticipating hazards, so that we shall have the opportunity to act in time. Do not get snagged on gadgets here; the most effective controls are those which provide information needed at critical times. You might get that just by listening to what the client is telling you (or what is left unsaid).

Controls are deliberate potentials for disquieting news. They are not the lights on the right seat vanity mirror. Controls are your lasers piercing the night you go into uneasily, sending back the information you need. We move directly, then, to information.

INFORMATION

Information is everywhere in the paradigm. Without it, we are conscious beings on perpetual standby waiting for data in grave doubt about our next move. So vital is information that theorists include it with matter and energy as an essential concept to interpret nature.

Design professionals are in particular need of an information theory, because information is exactly the component they add to the building process. Principles are needed to encode information that will empower others to act in predictable ways. No complete theory is handed to us; however, there are tools.

a. Probability

Probability is the linking theory between the sender and the receiver of information. We have need of probability, because the transfer of useful information always involves uncertainty. The measure of that uncertainty figures in the difference between the sender's and receiver's knowledge and the skill of the information encoding.

You can readily see why this is the case. If your knowledge and mine are the same, a transfer of information between us has little uncertainty, but what we exchange adds little to our knowledge. The exchange would become completely predictable, and we would end it. Ratso Rizzo put it most bluntly in Midnight Cowboy by saying, "There's no use talking to you. Talking to you is just like talking to me."

The prime purpose of communication is to exchange messages that are not predictable, which is to say we want to send missing information. The outcome is always uncertain. The greater the missing information, the greater is the inherent uncertainty of the outcome, but the potential exchange is all the richer for it.
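
In symbols, the point can be sketched with Shannon's measure of the information carried by a single message; the notation below is a modern convenience introduced here for illustration, not a quotation from the text:

    I(x) = -\log_2 P(x)

A message the receiver already expects with certainty, P(x) = 1, carries I(x) = 0 bits: nothing missing is supplied. A message the receiver thought had only a one-in-eight chance, P(x) = 1/8, carries 3 bits. The less predictable the message, the more missing information it can supply.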

The rich potential will not be achieved if inept encoding decreases the probability of the intended outcome. Probability will be low if the message cannot be understood. The information will not empower the receiver to act, and actions that follow may be based on information gathered from other sources. You will not have influenced the probability of the outcome (affected, yes; influenced, no). Probability will also be low if the message can be understood in so many ways that the receiver is empowered for numerous possible outcomes.

When information is structured to achieve an acceptable probability of its intended outcome, we come closest to getting something useful from our communications.

Point one is your knowledge that professional advice (drawings, specifications, letters, and conversation) supplies missing information, whose probability of intended consequences is made acceptable, if at all, by skilled encoding. Seeking an acceptable probability focuses the sender's consciousness on the receiver's missing knowledge so that the obligation to fill that gap will be assumed. Success is not measured by what is said. "I made my point!" misses the relevant investment. Dividends are paid on the action taken in response to the message received.

Uncertainty forces us to encode information in ways meaningful to the receiver: Use vocabulary already commonly understood; order information in logical blocks which prepare the receiver for complex missing information; test the possible outcomes empowered by the information beforehand; use redundancies to limit the possible outcomes.

I hear a few readers barely able to suppress a rejoinder. "Doesn't the receiver have an obligation to comprehend and tell me when there is confusion about my information?" The receiver does, and, when you are a receiver, you should honor the attendant obligations. The decision will be yours then, but the decision is another's when receiving from you. Do you see? You should aim to influence probability. You cannot pitch poorly on the premise that you are owed good catching.

b. Redundancy

Redundancy makes complexity more predictable. The possibility of error in complex tasks is great, because there are many possible outcomes from their attempt. You can test this by operating a transcontinental car ferry for a few minutes. Traveling by road from New York to San Francisco is complex to begin with, because there are so many possible roads. When you instruct one hundred drivers in New York to drive to San Francisco, unpredictable arrival times are likely. The system of instructions needs redundancy to enhance its predictability, which can be achieved by internal rules. More stability is achieved if only paved roads are allowed. That rule is, strictly speaking, unnecessary information for getting to San Francisco. It is redundant, and it increases order by reducing the number of possible routes. Greater order is achieved by progressively greater redundancy: Use only U.S. interstate highways; use only U.S. Interstate 80. We could go on by adding internal rules on speed, check points, and place and length of rest stops until the reduced number of possible outcomes constrained by redundancy yields acceptable predictability.
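
A small sketch makes the counting concrete. The route counts below are invented purely for illustration; only the trend matters: each redundant rule shrinks the set of equally likely outcomes, and with it the missing information, measured here in bits.

    import math

    # Hypothetical counts of distinct New York -> San Francisco routes that
    # remain after each progressively tighter (redundant) rule is imposed.
    # The numbers are invented for illustration; only the trend matters.
    rules_and_routes = [
        ("no internal rules",        4096),
        ("paved roads only",          512),
        ("U.S. interstates only",      64),
        ("U.S. Interstate 80 only",     1),
    ]

    for rule, n_routes in rules_and_routes:
        # With n equally likely routes, the missing information is log2(n) bits.
        missing_bits = math.log2(n_routes)
        print(f"{rule:<26} {n_routes:4d} possible routes  "
              f"{missing_bits:4.1f} bits of missing information")

When the count reaches one route, nothing is missing about which road will be taken; the redundancy has bought predictability at the price of extra, strictly unnecessary instructions.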

One of the reasons both drawings and specifications are issued is to surround a thing with redundancy in a way that makes the result intended more probable. Detail drawings serve a similar function. Redundancy makes the complex system of construction more predictable.

Information theory instructs you to test the probability of your information by anticipating the number of possible outcomes from its use. Too many possible outcomes indicate unpredictability, which calls for more skillful encoding, including, perhaps, greater redundancy.

Finally, an acceptable probability requires knowledge that no communication is ever completely secure. Settled understandings are disordered over time. That is the reason we move on to entropy.

c. Entropy: From the Greek Root Meaning Transformation

Entropy is an additional concept needed to figure the probability that information will have the desired outcome. Entropy describes the tendency of all things to become less orderly when left to themselves. It is the measure of disorder in systems of matter, energy, and information.

Entropy has highly predictable applications in natural systems. For example, under the first and second laws of thermodynamics, energy does not alter its total quantity, but it may lose quality. A fully fired steam engine left unattended ceases to function not because total energy is less, but because its energy is dissipated from a highly regulated form in the boiler to a randomly organized form in the atmosphere. When cold, the engine's energy system is at maximum entropy.

Claude Shannon's work on information theory at Bell Telephone Laboratories, published in the July and October 1948 issues of the Bell System Technical Journal, is the prototype scientific application of entropy to information systems. Shannon theorized that information in a system tends to lose quality over time by progressive transformation from an organized form to an eventual random order. At maximum entropy, the information has so many possible arrangements that nothing useful or predictable can be made of it. Missing information is then at its maximum; outcomes are highly uncertain, and predictability is very low.
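
Stated as a sketch in modern notation rather than as a quotation from the 1948 papers, Shannon's measure of the disorder in a source with N possible arrangements, the i-th occurring with probability p_i, is

    H = -\sum_{i=1}^{N} p_i \log_2 p_i

H is zero when one arrangement is certain and reaches its maximum, \log_2 N, when every arrangement is equally likely; that uniform case is the maximum entropy at which nothing useful or predictable can be made of the information.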

For example, a beaker of seawater in one hand and a beaker of distilled water in the other represent a highly ordered system of information about their contents. The information is significantly disordered when I mix them together in a single beaker. When the mixed beaker is drained into the ocean, complete entropy of the information is reached. The order of matter and information is completely random, and no use can be made of it.

Entropy is ubiquitous. You likely discovered its consequences when you first remarked, "When did we start doing things that way? Whatever happened to our practice on doing things this way? Who gave them the idea they could do that?" The when, whatever, and who is entropy.

Entropy is systemic. A firm with many branch offices is headed for disorder in numerous systems, which will account for some of the confusion among them. The decision to set up a new department is a decision to risk entropy in a new system. A decision to locate one firm in two buildings is just asking for trouble. Entropy, you see, is also a concern of effective architecture.

Let's check in on our car ferry business. Shannon's corollary predicts that over time our highly ordered way of getting reliable arrival times in San Francisco will fall apart: Our drivers will seek scenic diversion; they will ignore the route rules; they will overstay at
Jesse's Highway Diner; Jesse may add new diversionary information; some maverick drivers will break the rules and chance acceptable arrival times; replacement drivers will ignore our rules and try the maverick routes. Left unattended, missing information will tend to grow, the possible outcomes will increase, predictability will decrease, and, at some point, total entropy will be reached when all arrival times are equally likely. There will be no information in the system we can do anything useful with.

d. Open and Closed Information Systems

Theorists postulate two fundamentally different kinds of energy, matter, and information systems. Open systems grow in complexity despite the demonstrable effects of entropy, while closed systems wind down on the entropy clock to eventual randomness. The cosmic scale of this notion makes a delightfully heady place to romp. Don't miss the opportunity. Solar systems that can exchange matter and energy with other systems grow, while a solar system closed to exchanges will collapse. Worlds are made and lost here. One fate or the other awaits people orbiting within information systems.

A closed information system resists exchanges. It busies itself with cybernetic enforcement of the system extant. Closed to disruptive outside influence, it may be thought stable, but the apparent stability carries the seeds of self destruction. The closed system will not change, but the surroundings do not stand still. New possibilities are unwelcome and do not spin off as innovative systems of greater complexity. Information missing in the closed system will approach maximum. The entropy alarm will sound; no one will hear. The Soviet Union is a deliberate, real-life laboratory of that, with consequences. It is not by information gathered from coincidence alone that glasnost sounds at least a quarter note in the trumpeted Soviet economic reform.

An open information system is very nearly the opposite. It thrives on exchanges. While entropy is running the clock down, an open system busily spins out new subsystems of greater complexity. The clock will run on them, but they are replaced by higher orders of complexity. Open system characteristics predominate in the system that encircles the design professions and construction industry.

But you might be tempted to believe the entropy clock has run quite far if you compared sets of drawings for yesteryear's grand old buildings with a set for an ordinary recent building. The grand and old may have been built several hundred years ago from a dozen drawing sheets, while even the quick and dirty model today takes several times that. Is there that much more missing information today? Is the difference in those drawing sets the measure of entropy in the system? It probably is not.

If the design and construction information system were a closed system, then I would answer, yes, that entropy was greatly advanced. In a closed system, we would be constructing basically the same building today as a century earlier in substantially the same way, and, if it took triple the number of drawings, that would be clear indication of enormous missing information. You would expect a highly disorganized industry.

None of that is the case. The construction industry today is more highly organized than a century earlier, and its projects are increasingly complex. In its open system, load bearing masonry got transformed into structural steel or reinforced concrete, both with curtain walls, on to geodesic domes, onward to space frames, and out, we might predict, to delicately webbed cities sailing in orbit.

Did I answer why there are more drawings? There are more because greater complexity has multiplied the potential outcomes, which design professionals endeavor to regulate by greater redundancy.

Your chore is to battle against entropy (and other sinister opponents) within a worldwide construction information system busily spinning out ever greater complexities. And it all happens not in glacial time but, luckily for you, fast enough that the greatest complexity has developed in your lifetime. It must be very exciting for you.

e. Something Profoundly Optimistic

Ilya Prigogine, Nobel chemistry laureate in 1977 (nonequilibrium thermodynamics), theorized that (despite entropy) ordered information can arise out of disorganization in complex open systems, and he observed in 1979 to The New York Times:

This is something completely new, something that yields a new scientific intuition about the nature of our universe. It is totally against the classical thermodynamic view that information must always degrade. It is, if you will, something profoundly optimistic.

We will want to find a personal entrance for you into the profoundly optimistic behavior of complex open systems.

Your entrance is not in the size or complexity of organizations providing architectural or engineering services taken either separately or combined. That system is minuscule in isolation. But complexity on the significant order required for profound optimism can be found. It is in the complex being that you are. Look for the door in the cognitive capacity of each person for billions of potential connections multiplied by the junctions possible with every other person. Your entrance is in the fields of mixed flowers discussed very near your next destination in Consciousness.

f. Points of Information

Information is everywhere in the paradigm, and, strategically, it is just before our time in the Consciousness. Information theory is the probability of your supplying missing information. Your consciousness embraces the capacity to see the measure of that and the many other uncertainties you face. The approach to quality is made when you act on your measurements of all the uncertainties you see to affect reality with careful, harmonious, conscious choreography.

CONSCIOUSNESS

a. The Place of It

The music in the quality paradigm is all single notes until they are written in a chord. The place of consciousness is to order the notes, record them, and sing the chord. If you would hear music from stone, steel, and glass, stand before a building you admire and ponder: Why does it stand? Why does it endure? What in it stirs my admiration? An explanation for its standing might be all laws of physics, its endurance all properties of materials, and the stir in you the emotive power of elegance or grace.

In Departures, a journal ended by his death from cancer, Paul Zweig wrote that we live in a "double sphere of consciousness." The near shell is occupied with our immediate needs, "full
of urgency, heavy with the flesh of our lives," but the outer sphere takes us, "into the future where we pretend there is time." Time in the outer sphere is our "experiment with immortality without which books would not be written and buildings would not be erected to last centuries."

Pause here and listen. The human music is playing all around you. The building you admire stands and endures because it stood first and endured first in the consciousness of its designers and builders. The building was an experiment with immortality that has lasted long enough to capture you, and it was made into art within the private estate of your own consciousness. Between you and some remarkable people, perhaps now gone, a great building has been created. It is exquisite work.

It might not have been great at all. A different consciousness by any one of you, and the building could have been ugly, dilapidated, or collapsed, but I thought you should be fortified first by beautiful music.

b. The Working Parts

The function of consciousness is to show us information from inside and outside our bodies, so we can order and evaluate it and instruct our bodies to act on it. Control over information is the hallmark of conscious beings.

The leaves of the Silphium laciniatum plant line up in a north-south direction to catch the morning and late afternoon sunshine, while avoiding the damaging mid-day sun. It is known as the compass plant for its reliability at reading sunlight and answering its genetic code.

The human nervous system has evolved more complex capacities. We are conscious not only of a perception of light, but also of the sensation of its intensity and color, the fact of the speed of light, the memory of a nap we once had in the light of a winter's day, anger at being awakened prematurely, the intention to install shades, the value of eyesight, the desire to write an ode to light. All these (perception, sensation, fact, memory, anger, intention, value, desire) are additional bits of information available to the conscious mind.

Consciousness exists when specific mental phenomena are occurring and you have powers to direct the intended courses. Intentions are bits of information which order other information in the consciousness. Intentions profoundly affect other information. Their effect is to reject information, rank it, interpret, synthesize, and order our bodies to action along particular paths. As a result, our consciousness is filled and continues to grow with intentionally ordered information.

c. The Self: Fields of Mixed Flowers

All compass flowers will, in obedience to their genetic code, align their leaves on the light source. A field of them has no dissidents and, as a result, limited potential; however, each conscious being has the capacity to intentionally order information differently. The behavior that results may be quite dissimilar among individuals. There is danger in that, of course, when managed unwisely, but the potential combinations of thought and action can yield complexity on the scale of cosmic numbers. Here is your personal door to the profound optimism of complex open systems. The management of it is the key to its opening or its closing.

Each human being carries an ordered set of information termed the self. It is the ordered set that tells you who you are. If a self set was read aloud in a crowd, someone would likely respond, "That's me, my identity! Where did you get all my private stuff?" That is how we would know whose set it was.

The self is a private estate and by far the most luxurious you can construct. It offers you command over billions of potential mental phenomena, and, when opened to connections with others, you have immense potential.

The self is a potent set. It is loaded with a lifetime of experiences: passions and pains, goals, values, convictions, and intentions. Information in that set is potent because it was ordered for the survival of the self ("That's you!"). If you do not remember ordering your potent set, it is because you had a great deal of help (some useful, some inhibiting) from family, school, society, and a very long, recorded genetic code of homo sapiens' experience on this planet.

Information that threatens the self will set off a quake in the consciousness. Adaptive strategies are quickly implemented; consequences can be ruinous and range far. The threatened self has but a few possibilities and they are urgent, heavy with the flesh of life.

Information that suppresses the self will cause a range of adaptations, from passivity and resentment to rebellion. We restrict our own opportunity when we initiate limitations in others, or when we fail to petition their potential. A thwarted self may find its final revenge in conformity. The point/counter-point between the proletariat and the withering state is: They pretend to pay us; we pretend to work. We are more at risk from another point and counter: They pretend to hear us; we pretend to think.

d. Mixed Flowers: Our Door to Profound Optimism

The door to profoundly optimistic complex systems is opened on the hinge of a consciousness free to act. If you think something very simple was just said, I join in your thought, but its simplicity gives no clue to the stubborn determination of people to confound it. The picture they make is a room full of them dutifully closing doors with one hand on the fingers of the other in their grim determination to discover what characteristic of doors is responsible for such excruciating pain.

General Electric caught itself in a door of its own making when it decided in 1983 to mass-produce rotary refrigerator compressors. The story is told on the front page of the May 7, 1990, Wall Street Journal. The reporter found that the product development phase was under severe time pressure to gain advantages over foreign manufacturers. Based on the lab test of about 600 compressors in 1984, mass production was commenced in March, 1986. Field testing was cut from a planned twenty-four months to nine. User failure reports commenced in December, 1987, about 21 months after production. By March, 1989, defective compressors had compelled GE to take a $450 million pretax charge for about 1.3 million compressor replacements.

The lab testing had consisted of running prototypes under harsh conditions for two months. It was intended to simulate five years of normal operation. One GE testing technician of 30 years' experience had repeatedly told his direct supervisors that he doubted the test conclusions. Although the units had not failed, there were visible indications of heat-related distress: discolored motor windings, bearing wear, and black, crusted oil. None of the
warnings to his supervisors (three supervisors in four years) were passed on to higher management. An independent engineer urged more severe testing, because there had been only one failure in two years. That was suspiciously low; the news was too good, but he was overruled.

Examination of failed field units disclosed a design error responsible for the early wearing of two metal parts that, you guessed it, caused excessive heat buildup leading to failure.

GE had declined to walk in the field of many flowers. It wanted an early success with every head turned the same direction, and it got that. But key cognitive connections were lost: GE had declined an offer of help from the engineer, who in the 1950s had designed the GE rotary air conditioner compressor; the testing technicians and independent engineer were not heard; no one dared insist on the longer field tests desired, because, "It would have taken a lot of courage to tell (the GE Chairman) that we had slipped off schedule." The senior executives, walking in the field of silent flowers, petitioned only good news.

Lessons at GE were paid in money and prestige. The replacement GE Chief of Technology and Manufacturing was quoted on the lesson he took from his predecessor's experience: "I'd have gone and found the lowest damn level people we had...and just sat down in their little cubbyholes and asked them, 'How are things today?"'

Well, everybody has a story to tell. Don't you wonder what is low and little about that place? Do you wonder who is damned there? After all that passed, do you suppose "please" is too lavish an offering for the door to profound optimism?

e. Back on Our Drawing Boards

Design professionals should mark well every door to the consciousness and install freely swinging hinges in both directions. You make your services out of judgments formed in the consciousness. The primary "things" that demonstrate their worth (the drawings, specifications, your advice) cannot be better than they are in the consciousness.

I will allow that drawings and such can be checked. They could get better by checking; however, I hold to my point on the consciousness, because checking drawings is not like checking any other "thing." If you made ball bearings, all the information about their quality could be quickly known. Metallurgy would give the properties of the steel, laser measurement would test thousands of them as fast as they rolled by, maybe test a sample to destruction, and a bucket of bearings would be right to specification.

But the quality of designs is not tested that way. Obviously, it is impractical to rely on a check of the final construction for proof that the design was good. A twenty-four month field test would have helped GE, but is an untimely control for you. Instead, a drawing checker looks for indications apparent on the drawing and in key calculations, perhaps, that the preparer met the standards for design. One consciousness checks what another has revealed about itself on paper. It helps. Two heads can be better than one here, but it remains a fundamentally different check and pass than you get on ball bearings. A laser sees every ball bearing the same way. One consciousness will not see the same drawing another consciousness sees. There is both benefit and mischief in that.

Judgments in design develop one on another until what appears on paper may mask deficient work many steps back. The indications become less clear. Designs are not dismembered step
by step when they are checked. Construction documents are not checked by duplicating the work any more than the knots in a Persian rug are tested by untying all of them.

The point is, all that you do is tied in your private estate on strings of consciousness. You have the first and best chance at quality there. No one after you will have a slate as clean and clear. We will visit inside your private estate later in Putting the Paradigm in Motion: Learning to See, where we can explore what you might mark on your slate.

Summing the Paradigm

We readily and correctly credit glorious design and elegant engineering achievements to the finest work of our most able design professionals, to the most effective consciousness. To find the causes of failure in services that are based on judgment, we will reach closest to their roots in the dysfunction of the consciousness, where information and the values, goals, and controls of the paradigm are ordered.

THE FREEDOM TO ACT

Reading the Race Reports

The human species is a physically weak contender, but it has taken planetary leadership by exploiting, among other things, an enriched brain.

Much is owed to the brain, yet our use of it is a paradox. Only a part of it is applied to outrun the surrounding Chaos. With that, we bravely pick our way through indifferent forces and cosmic coincidence. Another part we give over to constructing comforting race reports about the absolute security of our pack-leading position in the food chain. Entire firms, indeed whole cultures, issue themselves favorable race reports. We believe ours. We believe each other's when quite convenient.

Smug satisfaction with our own race reports is unwarranted self-congratulation. By listening to them, we lose the freedom to act creatively, decisively, and effectively. We surrender to entropy in our information system by disabling our consciousness. We stun exactly the faculty needed for agility and proficiency. We are overtaken, outmaneuvered, and the prize is lost.

a. X On the Reef

And we can lose woefully. Those who are captains of firms take a lesson! Before there was the agony wrought by the Exxon Valdez, there was an opportunity to determine its rules of operation. The reasoning against allowing passage of super tankers from Valdez, Alaska, through Prince William Sound was to many minds convincing, but it was finally silenced under a thick blanket of government and oil industry assurances of extreme safety, profound expertise, and good intentions. There was to be virtual "air traffic" quality control over the sea lanes beginning in 1977. Why that was a comfort at the time remains a mystery.

Years without fatal incident were sweet succor. Every player, from ship's captain, Exxon president, Alyeska, and the State of Alaska to the Coast Guard, readily accepted obliging race reports. Bligh Reef wasn't one meter further out of harm's way, but the consciousness had new information: The shoals near Bligh Reef had been passed hundreds of times without incident; no oil tanker had been holed in Prince William Sound; the oil fields drained into
Valdez, Alaska, had peaked, and the campaign was on the downslope side of economic maturity.

At Exxon and Alyeska, that information was ordered by intentions to improve profits. Equipment and manpower were cut, safety and cleanup systems were disabled, and the surplus was siphoned to a thickening, black bottom line.

The Exxon Valdez had clear sailing as far behind as the eye could see. Bligh Reef waited. If the cosmos played at tag, its mechanics must have watched rapt, while, on March 24, 1989, Bligh's prize was delivered by current, wind, and human fallibility heavily laden and hard aground. The Exxon Valdez, gorged with obliging race reports, belched sickening defeat.

Are we unfair to Exxon and the Exxon Valdez? Couldn't they, after passing Bligh Reef and clearing Prince William Sound safely time after time, prudently lower their concerns and preparations? They could not, because each passage was an independent event. The conditions on one passage were not the same as the next. Variables of sea, weather, and crew preparedness were different for each, and they were different on March 24th. Only Bligh Reef was unchanged. Belief in Exxon's race reports was no more prudent than your using the same soils report for all projects on Main Street.

Repeated success with similar independent tasks enhances the qualifications for the next attempt, but it does not suspend the hazards encountered. Architects and engineers, who have designed this or that project type dozens of times, may have paramount qualifications for the work; however, they have not suspended the hazards. Success talks, but it can speak praise we should not honor.
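
A hedged bit of arithmetic, with a per-passage figure invented purely for illustration, shows why repeated success does not suspend the hazard. If each independent passage carries a small probability p of grounding, the chance of at least one grounding in n passages is

    P(\text{at least one grounding in } n \text{ passages}) = 1 - (1 - p)^n

With p = 0.001 and n = 1,000 passages, that is about 1 - 0.999^{1000}, roughly 0.63. Each individual passage remains overwhelmingly likely to succeed, yet the accumulated exposure makes an eventual incident more likely than not, and no string of clean passages reduces p for the next one.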

b. Controls for the Barely Conscious

What sort of controls are expected from people slowed by their own race reports? People intent on the rear view mirror need a great deal of time to act in response to controls. They do, but they don't know it. They are cozy in a closed information system, free from the perception of uncertainty and blind to the possibilities.

Aboard the Exxon Valdez, it was a control that the crew shall: "Call the Captain's quarters for instructions abaft Bligh Reef." As a control, it was designed to give the Captain notice of the approaching reef and an opportunity to give his instructions. The goal was apparently to pass Bligh Reef without bloody running hard up the back of it.

How do you rate it? Some use a similar control: "Check the drawings before release to construction." That is a point on your chart just abaft Release Reef.

The control increased the Captain's time and opportunity to act in one way. It gave him a single point in time to check where the Valdez was in relation to Bligh Reef and the same single opportunity to correct his course. Next stop: Bligh Reef.

It was not a wise choice among so many better ones. For example, "The captain shall remain on the bridge until Bligh Reef is cleared." The time and opportunity to act is increased, and there are bonuses here: The price for a captain on the bridge is the same as a captain in his quarters; a captain at the helm more likely has a consciousness alive to the hazards than a captain on the bunk.

Would you add a ship's pilot to Exxon's quality program? An expense authorization from Exxon is needed for that. The time for it was in 1977, when the goals were settled. It is too late now, of course. Quality is built into a project at its beginning, if at all.

c. Summing the Entropy Oblige

Cosmic machinations made Bligh Reef the winner by patient waiting, but not you and me. Humankind competes against Chaos only as the eager, agile beneficiary of a remarkable ability to remake itself by adaptation. The ability to add experience to the consciousness, indeed, to add gainful adaptations to a flexible genetic code, is a qualification for planetary leadership. We are not obliged to listen to the siren's song of numbing contentment.

We race against the sands of time. The freedom to act is power to our legs and mind. Powers absorbed in issuing and receiving obliging race reports collect sand. Silica petrifies us; we slow, then we die.

Shaking the High Wire

a. Entrustment of Judgment Workers

The manager aiming for quality results needs to take care with the people entrusted to the project. We speak carefully here. People are neither entrusted to the manager, nor is the manager to take care of them. Judgment workers are committed to the project, and they take care of themselves.

Design professionals are judgment workers. They typify the workers fast dominating the kind of labor done in America. Specific qualities recommend them.

Judgment workers are distinctively goal directed, highly motivated individuals, who have arrived where they are by selecting ever more demanding and specialized careers. Their paths were deliberately chosen, and the personal stake in them is high. They are the guardians of their own investment in values. Paths of action supporting personal goals and values are beneficiaries of their personal stake. Contrary paths clash strongly with the potent set, which can initiate conflict and ultimately threaten the self. Unclear or conflicting goals can create conditions of siege.

Care was justifiably taken to say that judgment workers are more likely committed to a project than to a "boss," and that they take care of themselves. Please note: That statement is not in the least equivalent to, "They look out for Number One." More often, the people I describe will inflict substantial penalties on Number One for the benefit of the project. Whether management receives the benefit it seeks from their work depends on the care management takes in its tasks.

A firm of consistently and skillfully managed judgment workers can achieve wonders. A firm managed with discrepant values, goals at cross-purpose, or with conflicting information can set loose its own Golem.

b. Seventy-four Seconds in Tribal Time

The seminal experience in failed engineered systems was thrust on the world by a project honored with stunning successes. The launch disaster of the space shuttle Challenger on January 28, 1986, the twenty-fifth shuttle program launch, was all the more shocking because of its management and engineering reputation. The presidential commission convened to
investigate the Challenger disaster concluded that both the launch decision process and critical rocket hardware were flawed.

The facts showed that, while Challenger was fueled, manned, and ready on the launch pad, Morton Thiokol booster rocket engineers raised a launch safety issue with Thiokol management. They had observed on previous launches that the synthetic rubber O-rings sealing the joint between two lower sections of the solid rocket booster had been damaged by hot gases escaping from gaps around the O-rings. The damage was most pronounced on cold weather launches. If launched as scheduled, Challenger would make the coldest test yet; there was potential for disaster.

The position of the dissenting Thiokol engineers can be well appreciated. NASA launch safety procedures required a "Go" for launch from Thiokol. The issue raised by the engineers had consequences, not only for the immediate launch but also for the safety assessment of all prior launches, future launches, and Thiokol's reputation.

The commission disclosed that there was debate. There was consideration of information. Ultimately, the engineers would report experiencing a shift that put them on the defensive. They would say it was as if the rules had changed. Previously, the control on "Go" for launch was, "Prove it is safe, or do not launch." The Thiokol engineers in the Challenger launch circumstances perceived a new control: "Prove it is unsafe to launch, or we launch."

The consciousness of Thiokol's management was not swayed by the debate or by data that could be marshaled while Challenger waited. Finally, the engineers were asked by management to, "Take off your engineering hats, and put on your management hats." In only a short time, the engineers withdrew their safety issue, and the Thiokol "Go" for launch sent Challenger into space history.

c. Obviously a Major Malfunction

Just the thought of beginning the discussion is daunting. The events are close and emotional. This and all the cases here exacted tragic costs. Still, we are seeking lessons, and the learning of them is a small repayment to history.

The strong intentions expected from goal-driven specialists were evident in the Thiokol engineers. The space shuttle program is an ultimate specialty. There is one and one only. The information they had (cold temperatures on previous launches, O-ring damage, and it's even colder now!) got channeled by strong intentions for safety right to the top of Thiokol. That could not have been pleasant. Playing the rock in the middle of the road resists other strong information in the self: intention to be loyal to the firm, intention to support coworkers, intention to advance one's career, and intention to support prior launch decisions (when no warning was given). After twenty-four successful launches, the potent self risked a considerable setback by speaking out.

The warning was spoken, and, importantly, it was given in response to a perceived control: "Prove it is safe to launch, or do not launch." That was the discussion the engineers expected and were prepared to have. But the discussion and the control were reversed. The engineers would not likely win the case, "Prove it is unsafe," because the consciousness plainly did not prepare and order the information that way. You can hear the consciousness scream, "It's a little bloody late to change the rules here!"

Management shook the high wire. Now, there were key judgment workers off balance. The Thiokol team had reduced effectiveness.

Now, the Golem was set free.

Changing the rules on the high wire has unsettling effects. There is new information in the consciousness: Management is inconsistent, management is unfair, management doesn't trust me, and I just might fall from here. The potent self gets punched hard when the rules are abruptly changed. When the O-rings were not proved unsafe, Thiokol's management asked the engineers to think it over, but, this time: Take off your engineering hat. Put on your management hat.

We cannot pry open the engineers' consciousness to find the twisted metal memory of that moment, but we can analyze the conflict provoked by such an instruction.

We honor a prohibition in arguing cases to the jury. The rule prohibits any attorney from arguing the Golden Rule. Counsel for plaintiff may not urge the jury to put itself in the plaintiff's shoes and decide for the plaintiff what the jury members would want decided for themselves. Defense counsel can't argue the Golden Rule in the defendant's shoes, either.

Why it is a rule is easily tested on this whiplash case: Ladies and Gentlemen of the jury, if you would but put yourself in my client's shoes, paralyzed, numb to the world of feeling from the neck down, cut off from hope for any normal life, a tormented head condemned to ride a dead body to its death. If you would do that, then this is the question I ask of you: "If you were this person, what would you want your jury to do for you today?"

How, from this position, shall a juror satisfy the duty to hear all the evidence and fairly decide the case bearing no prejudice to either side? I know I want the money, lots of it, and so do you! There may be saints (or are they brutes?) unaffected by the argument, but the system of justice does not chance it.

Next try the fit of this: Engineer, take off your engineering hat and put on your management hat: If you were faced with aborting this space launch in front of the world today, and, if you would then be required to face the press, NASA, Congress, and the President of the United States to explain why the Thiokol rocket is unsafe, what would you want your rocket engineers to do for you today?

These words, of course, were never spoken, but, under the circumstances (Challenger waiting at the "Go" line for a Thiokol release, the potentially embarrassing safety debate unresolved, the immediate need for a decision), prudent systems of management (like prudent systems of justice) do not chance the consequences of shaking the high wire in that way.

Remember what is included in the potent self set. Here are the values, goals, hopes, loyalties, ambitions, and desires. Shaking a person there sets the high wire into wildly accelerating waves. The person either holds on against wave after wave, or he lets go of the position.

And if the position is abandoned under pressure, what has management tested? Has it tested the merits of cold weather O-ring integrity? Does the question somehow clarify the
understanding of any properties of O-rings and the behavior of combustion gases? Could it ever? If it couldn't, is it sensible to put the conflict into another's consciousness?

And if the position is abandoned, is a new set of values installed that, because of a narrow reference as an engineer, could not be seen except by playing at management? Could it ever?

The instruction does one thing: It abruptly tests whether or not a person for that moment valued agreement with management, the tribe, the hunting lodge, to such an extent that it would override confidence in a contested engineering judgment and the intentions that ordered it.

That is galling in the extreme; hold on to it for the lesson taught in architecture and engineering. The control, "Prove it is safe to launch, or we do not launch," supports the goal, "Safety First," and the value premise in the lead of that is: Human life is good.

Try that brief metaethical exercise yourself, beginning with the control, "Prove it is unsafe to launch, or we launch." Play the notes back from control to goal and search out the value premise in the lead of it.

We will wait here for you. Are you back? Now, try it again with the control, "Prove it is unsafe to build, or we build." Did it take long this time?

The test result in this instance was that Thiokol engineering did value more its agreement with Thiokol management, and it took momentary solace in that agreement from the 24 prior launches.

A key engineer remarked later, "We may have gotten too comfortable with the design." You can hear silica replace key carbon cells in the registration of obliging race reports. Entropy unchecked had progressively disordered the boundary between a safe and an unsafe O-ring design. Framed in ghostly margins, a man pitching on the high wire could just make out a "safe enough" design in a thoroughly bad one.

By all of that, Thiokol had progressively reduced its freedom to act. The opportunity to save Challenger was lost.

d. Little Boxes, Bureaucrats, and Children

The presidential commission was besieged by a NASA and Thiokol technical polemic on performance of the O-ring in cold weather. The seventy-four seconds was presented as indecipherable coincidence and accident, but Richard Feynman, Nobel physics laureate in 1965 (quantum electrodynamics), finally cut to the chase. Feynman dunked a clamped piece of the synthetic rubber O-ring into a glass of ice water to simulate the preflight conditions. The material did not rebound; it could not fill gaps in the cold. Misjudgment had entered there through bare millimeters. Golem was Challenger's eighth and its fatal passenger.

Test and hearings done, the commission concluded that NASA must rework its launch procedures to encourage a flow of additional information from more people directly into the launch decision.

NASA broke up some of the little boxes. That helps. The DC-3 aeroplane was designed prior to 1936 in a hangar without walls between the engineers, wing, tail, instrument designers, or anyone else. When you wanted to know something, you walked over to the person doing it, and you displayed what you were doing.

But there is a question bigger than little boxes: Whose stock was raised and whose fell by this quality disaster? The key engineer, who rose to answer the launch control, fell; he lost his job. The one and one only space shuttle program is over for him. Thiokol, however, is busy today making more NASA space shuttle rockets.

Yes, everybody has a story to tell; it turns on a question of economics, they say. It turns there, but it twists and it contorts before us on the values exposed by behavior. Children can figure the lesson, and they do.

e. But I Would Never Shake the High Wire

Actually, I don't accept that, and you don't either. There is motion in all high wires. Some anxiety makes the trip interesting, and judgment workers will seek unsettled circumstances for the challenge in them.

Challenger's circumstances are dramatic to the point of exhaustion, but the high wire sways for people in more ordinary circumstances: The project architect or engineer trying to meet a release date or a budget, leading the largest project ever, leading a project gone sour, managing the new department or profit center, opening a new market for the firm.

All involve a critical test of the self. Although tests do build strength, time on the wire is taxing. An individual without options may implement self-protection plans harmful to quality: Ignore unfavorable facts that would draw criticism, leave problems unreported to avoid early judgment, end or abbreviate the professional service when the schedule or budget is exhausted just because it is too great a hassle not to. Problems may not surface until the freedom to act is largely lost.

How to send a call for help with immunity from a self-damaging counterstrike is key to keeping people in balance on the wire.

f. Getting Help on the High Wire

Auto assembly line management uses a technique that I like. By it, management determines, with input from workers, what tasks can be done on the car at a "station" limited by two lines painted on the floor. That puts the worker on a wire. Auto carcasses are run between those particular start and goal lines, parts are lifted, placed, fitted, and fixed one after another.

Ready to start? Action! Cars are rolling, and parts are fitting quickly, tightly. It's going well, and we are making good cars today, but what happens when the worker sees that a carcass might skid across the goal line before the assigned work is done? Are we going to lose another one off the wire? Will I get the leaky windshield, will you get the magic self-opening door? Not this time. The worker will hit a button that sounds a horn, and the horn will tell the floor section foreman to get on the line and start slinging parts. Bravo!

What I like most about this is not that both management and workers contribute to goal settlement (one good idea); it is not that management pitches in (another good idea); what I
like is that the horns go off routinely. People use them, and that says the people have trust. That's what I like!

You are on a high wire. You will put other people on high wires. None are afraid of heights, and the view up here is exhilarating. You can see buildings from here that will last for centuries. And they will, too, if every high wire you construct has a safety valve people trust enough to use regularly.

The Focus Filter

The consciousness operates full time, unless it is turned off by sleep, injury, disease, or death. While we are conscious, we have the power to receive and order information. The raw potential of our consciousness is great, but it does not mark the boundaries of our actual consciousness at any particular time.

In the 1930s, the researcher Jacob Von Uexkull applied a technique in environmental studies for framing what an animal actually perceives in its environment. Whereas an outside observer sees the animal in the environment, the individual animal perceives only the Umwelt, or self-world.

As planetary leaders, we are accustomed to seeing the environment, and we easily assume that is our world. I think we do that because we claim a victory that allows us to survey from the top of the hill. In fact, we have the greatest need to see ourselves within our Umwelten.

All the information an individual perceives about what is happening at a particular time is, for that moment, the individual's Umwelt. It is the world perceived, or the self-world. That you are in your Umwelt necessarily means that you do not see the environment.

The self-world for one person differs from any other's. From place to place and culture to culture, the differences cause people to see separate worlds in the same space.

The Inuit peoples hunt vast Arctic grounds without maps pasted to their sledges or kayaks, and they continue the hunt through the long winter even without a reliable sun fix to guide them. Take a long trek with a hunter, and figure how to return. While you and I are looking for North, East, or for Silphium laciniatum compass plants, the Inuit hunter has taken account of the direction of the wind monitored by the motion of it on his fur hood. He has noticed the ocean currents, memorized the color of the ice and ocean, the texture of the snow. You are lost in an Umwelt ended at the tip of your nose. The Inuit hunter wonders how you lived so long in your land and surmises everyone there is starving.

Back home, your Umwelt is larger, but it is still a source of troublesome flux. You concentrate on one thing, and you miss a message on another. There are ten things on your mind, and your consciousness is busy swapping intentions to let some information in, ignoring other information, and misunderstanding the next message. Your focus filter is making efficient use of your attention, or so you think, if you are aware at all.

Does your Umwelt for the critical instant encompass the clues you desperately need? You can't feel very confident. Just a minute ago you were as lost as a baby in the Arctic, and now there is something strange about your office.

a. Do We Know What We are Doing?

That question reverberated in the aftermath of the pedestrian bridge collapse at the Kansas City Hyatt Regency Hotel, and it tolls still. One hundred fourteen people died from it in July of 1981. Design professionals are drawn to that particular question, because the infamous changed bridge hanger detail staring at us from an orphaned shop drawing is exactly the kind of puzzle they are deft at solving.

Briefly, because it is everywhere reported, the design for suspension of the two stacked pedestrian bridges called for a hanger rod passing from the building roof trusses through the floor beams of the upper bridge, where the rod was connected, and continuing through to the floor beams of the lower bridge for connection there.

Notice that the load on the upper bridge connection is the weight of the upper bridge. The load of the lower bridge is on the rod. Hold on to this: One rod means one load on the upper bridge connection, which we will call Connection "A". The lower bridge hangs on the rod and not on the upper bridge.

A steel fabricator's shop drawing was generated, which changed the design by calling for two rods. The first rod passed through the upper bridge floor beams, where it was connected, and that is Connection "A". So far this is just like the original design, and the same upper bridge Connection "A" carries exactly the load put on it by the original design-one load.

But, a second rod was started next to the rod just connected, and it was passed through the lower bridge deck beams where it was connected. Trace the load again. Now the lower bridge is connected to the upper bridge. The load that stayed in the rod on the original design was now hooked onto the upper bridge. As a consequence, there were two loads on the upper bridge Connection "A" and the bridges failed when the rod pulled through the upper bridge deck beams. That is the "Double Load" reason the bridges failed.
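
A minimal numerical sketch may make the "double load" point concrete. The weights below are purely illustrative, not the actual Hyatt loads; the only point is that Connection "A" carries one bridge load in the one-rod design and two in the two-rod design.

    # Illustrative sketch of the "double load" effect (hypothetical unit weights).
    upper_bridge_weight = 1.0   # one unit = weight of the upper bridge and its occupants
    lower_bridge_weight = 1.0   # assume the lower bridge carries a similar unit load

    # Original design: one continuous rod from the roof trusses through both bridges.
    # Connection "A" (rod to upper bridge) carries only the upper bridge;
    # the lower bridge load stays in the rod itself.
    load_on_A_one_rod = upper_bridge_weight

    # As-built design: two rods. The lower bridge now hangs from the upper bridge,
    # so its load is added to Connection "A".
    load_on_A_two_rods = upper_bridge_weight + lower_bridge_weight

    print(load_on_A_one_rod)     # 1.0 -> one load, as the engineer intended
    print(load_on_A_two_rods)    # 2.0 -> two loads, the condition that failed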

b. Whose Umwelt and How Big?

The original design was engineered by the project's structural engineer, but the shop drawing was produced by a steel fabricator, and it was not checked by the project structural engineer. You might expect a quick clear answer to, "Why not?" if you were from outside the industry. People in the profession know the answer doesn't come so quickly, and that is why the question applies, "Do we know what we are doing?"

It is the practice that steel fabricators detail connections. I think that is because people believe it saves money. It is the practice that architects and engineers check fabricators' shop drawings for general compliance with the design concept, and that check is not always done by the original project structural engineer. I think that is because people believe it saves money.

I also think it means that, in the rush and complexity of a design and construction project, there is a somebody who may not focus on the exact information needing attention in the precise way that will help. Our Umwelten all have different coverages. There are gaps in between, and the Golem hides in them. The size of our self-world denies our consciousness access to the signs that need reading, we miss the color of the ice, the wind confuses us, we lose our way, and the world comes down all around us. It would kill an Inuit hunter, and it kills us.

Engineers and their advisors know that design changes are a major source of uncertainty. It is a place where one party's intentions acting on a design ordered by another's intentions may well block the objectives of both. Two heads are not better than one here. The engineer, fabricator, and contractor are not in the same DC-3 design hangar, so none has ready access to the others' consciousness. Their intentions order information differently, as they are not agreed on settled goals. The engineer, here, intended a hanger with simple, but specific load bearing characteristics, hence a one rod/one load system.

The contractor may have perceived that stringing a long rod through two bridges was cumbersome. The fabricator and contractor may both have wondered how to thread a nut at the upper bridge connection, as you can't push a nut over the unthreaded part of the rod to its middle. The shop drawing intended to answer those issues by a two rod system. Right there in the gap was a two load Connection "A", which entered no party's consciousness.

Had the information been ordered by consistent intentions acting on settled goals, construction could have been accomplished simply. Safe suspensions would have sustained life. All the two rod objectives could have been answered if the two rods had been joined together by a commonly available sleeve nut rod connector at or near Connection "A" to preserve the specific one rod/one load result intended by the engineer.

c. You Will be Blind if You Look Where the Golem is Not

You do not see the Golem unless you are looking for him-right now! When the Golem is outside your Umwelt, it is outside the world you know. Feel the limits here; rail bitterly against them; but the mischief is your own. Planetary leaders blind themselves by failing to anticipate the need to expand their self-worlds or by their immersion in an irrelevant field within them.

All of our empathy begs for a second chance: (1) to express at the outset that design changes order information by different intentions along goals that are not uniformly settled, (2) that changes will mask neglected risks, (3) therefore, to install a control that all proposed changes shall be personally reviewed, rejected, or approved by the original engineer in whose consciousness we expect to find the intentions that will order information in time to give us the freedom to act.

We can learn to read ice, and, if we expand our self-worlds to include reading ice when only the clues there will save us, we stand a chance at getting back home with some reliability.

Do We Bias Ourselves against the Future?

We are more likely than not to scorn people who call out our errors. They have "20-20 hindsight," and it is easy for all to see what is apparent afterwards. Because we are all technically blind in the future, we exist in cramped empathy with people who are smacked hard by it. We are all in the same boat, I suppose, but should we be content to book passage through time with those who dare without proper preparation to affect reality for centuries?

a. Markers in the Consciousness

We are all time travelers in both directions. The past is our reservoir of experience; the future is our uncertain dominion. We travel a brief span of reality measured by the time we spend in it. No fare is collected, and no itinerary is posted.

But you are not content with that. No reason for architecture or engineering exists, unless reality is changed by it. You are a traveler in reality bent on changing it. You are not alone. Before you, travelers bent on discovery of new lands made the legends we studied and admired in school. Who did not aspire to an explorer's life?

Geographers know that exploration begins before the ship weighs anchor, and explorers never quite escape the exotic territory of the human mind. Exploration, teaches geographer J. Wreford Watson, is a process beginning in the imagination with preconceived ideas about the character and content of the lands to be found and explored. Explorers place imaginary markers on the land before one foot is set on it, and, once there, interpret what is seen to verify the markers sent conveniently ahead.

What we observe in new lands, Watson teaches, is neither new nor correct. Rather, what we see is a distortion "compounded of what men hope to find, what they look to find, how they set about finding, how findings are fitted into their existing framework of thought, and how those findings are then expressed."

The bodies of more than a few explorers are buried in a reality more strange and harsh than the one imagined or reported by the first to arrive.

b. Claiming by Markers

We carry our reservoirs of experience forward by placing markers ahead of our actions. Design professionals look ahead to the next project, and they imprint the expected reality of it with markers from their experience. I think explorers place markers for the want of courage to meet the unfamiliar with their lives. I think it is merely expedient for us at home. It is our way of fitting new realities into the garrison of the familiar. Whether the familiar will imprison our creativity or marshal a new and finer order depends on the preparation of our consciousness in that exact instance.

If we send our imaginary markers, as Watson would warn us, compounded by what we hope to find, what we have decided to find, what within our existing framework of thought we are prepared to find, then, we will not claim new territory.

There is no conclusive trepidation in that. The essential and cheerful point is that we can distinguish reality from our imaginary markers. The garrison of our experience will not acquit our comfortable conjecture if the consciousness is prepared to discover and reject all counterfeit reality.

PUTTING THE PARADIGM IN MOTION: LEARNING TO SEE

What exceptional powers have the complex and wonderfully adaptable beings that we are. But how timid and awkward do we approach our potential. We are hunters who turn prey at our own hands. We build our own versions of a human consciousness by inventing our self, our self-worlds. To our great dismay, we do not always build wisely, and our maintenance can be shabby at best.

The Self Estate

If I have occasionally made the self sound troublesome, dispatch that notion. Thomas Edison said he grew tired of hearing people run down the future, as he intended to spend the rest of his life in it. Your time in the future is in the self and the consciousness you construct. You have need of a strong and orderly estate.

The self has been much theorized, amplified, anesthetized, and penalized in philosophy, science, and society. We humans seem at times astonished at our capabilities and, alternatively, fearful of our potential. You may study the books yourself and draw your own conclusions about which ideology's sound you favor.

We bark for no dogma here. We look instead for what will help us increase our freedom to act. For it is the freedom to act that powers the quality paradigm.

To command that freedom, we need to know what labor our consciousness performs, collect the sweat of it on our faces, and feel its exertions as experience. We need to conserve our values and tie goals and controls in harmony with them. We need to chart the limits of our Umwelt, feel the weight of information and give good measure in it, and know the dysfunctions that inhibit our performance. Plainly, we will never complete the task of mastering all of that, but we can open our consciousness to secret possibilities and make at least a few breathtaking leaps through reality.

a. Splendorous Estates

Ours is first among all species for the time and energy spent in preparation, and our aspirations have propelled a pace of change that may predestine us all to a constant state of refitting. That would be a fine destiny, indeed, one well suited for planetary leaders. It is entirely suitable for any person bent on changing the planetary reality, even by adjusting a corner of it on Main Street.

The point here acknowledges but moves across your constant technical preparation: The point lands in your private estate. There, you are made guardian of your values and goals, and you determine the paths you will follow; there, you install controls on your progress along settled paths; there, you experiment with immortality, and you plan to change reality, perhaps forever. Do you marvel at the splendorous potential of your private estate? Look at the rooms, the library shelves, secret passages, doors to cryptic codes unbroken, and the plans, dreams, the flying machines! No one else has an estate quite like it! It is no wonder Strauss has appeared suddenly to conduct Also Sprach Zarathustra just for you!

Tell us Guardian: What will you build from all that wonderment in our shared reality? Concentrate! Strauss is on tiptoes; poised, baton waved high; the music soars here! This is your moment: Yours is the next breathtaking leap into reality leaving changes in stone, steel, and glass, perhaps forever! The doors to your estate fly open; the light from a billion possibilities is brilliant, but the music fails, it fades, and a critic looks back on you from your own shoulder. You can't stop it. Reality is sobering; there are doubts in the approach to it, and there ought rightfully to be. Reality is, finally, a splendorous place only for the people who are first prepared to leave their private estates and earn a place in it.

b. The Concern of Self

There is no single way of preparation, but none will escape the need to travel paths in that direction. I offer you a recommendation from my time spent visiting disappointed estates and on the grounds outside fitting back the pieces of broken realities.

The classical Greeks postulated a self that held but the promise of attaining credentials fit to affect reality. The promise would be unfulfilled unless the individual undertook the obligation to prepare and care for the self that it would be made and remain qualified to venture into reality. The guiding, early Greek principle was, "Take care of yourself." Scholars teach that meant that you were to undertake the care of yourself as a central obligation. The philosophy obligated the individual to take guardianship over the self estate.

To accomplish your care, you were necessarily required to see yourself as a figure of some central importance. Over time, that Greek principle got into the same kind of trouble the Sun had at the center of the solar system, and both lost ground in the fierce competition for the heart of mankind's proper concern.

"Take care of yourself" and its corollary, "The concern of self," gave ground to the more accommodating, gnothi sauton, or "Know yourself." Scholars teach that "Know yourself" had a different point: It was ecclesiastical advice of the time meaning, "Do not presume yourself to be a god."

Good advice for this time of day, as well. But, as the Sun was restored to a working position in the solar system, perhaps we may find useful employment in our time for the Greek concern with the self.

Askesis Back to the Future

The philosophy has technique. Askesis is a discipline to prepare the novice human being for reality. The techniques of askesis train the self. In the Greek, the objective pursued is paraskeuazo, "to get prepared." While adrift in a neophyte consciousness, you know neither yourself, nor do you know reality. You are unprepared to take your place in reality. But you become prepared by the acquisition and assimilation of the truth about yourself and about reality. All of that is knowledge necessary for you to take care of yourself.

Do not draw the conclusion that the concern with self can be called self-centered as we use the term today. Askesis sought to train the ethical self. Its training led to the assimilation of truth, and its practice tested one's preparation for doing what should be done when confronted with reality. The first note of the paradigm, the value note, is everywhere evident in the askesis. Greece was to get a healthy Republic out of the healthful self.

a. The First Discipline Melete: The Premeditatio Malorum

The askesis is a philosophical tradition in two parts. The first part is melete in the Greek or meditatio in the Latin.

Melete is the progressive consideration of the self performed by contemplating anticipated situations, then engaging in a dialogue of useful responses, arguments, and courses of action.

The meditatio is a "what if-then this" inquest of the self. Accomplished alone, it is a meditation; with another, it is Socratic exchange:

I foresee something. What will you do?

I will do this. Why?

Because it will do some good. But, is that enough?

I believe it is. But, I foresee something else.

And on it goes.

The meditatio is especially commended to judgment workers, whose thoughts (unlike ball bearings) cannot be readily measured and whose flaws, if not detected by themselves, may be unperceived by others. Judgment workers are the guardians of their own values and goals, but they are fiduciaries of precious parts of the reality we all share. Before our shared reality is changed forever by what is put there, people who bend it by their intentions should test the consequences first on themselves. You do not need more examples of that.

Those who pursue the classic, premeditatio malorum, pursue training of the ethical self. It is preparation that increases the freedom to act by forcing upon us images of the future misfortunes we will wish to avoid. It teaches us to see the Golem.

The discipline develops three eidetic (cinegraphic) reductions of future misfortune:

FIRST: Imagine your future, not as you believe it will be, but imagine the worst that can happen in your future. Do not temper the images with what "might be" or "how it happened once before," but make the worst case imaginable for yourself.

SECOND: Imagine not that this will happen in the future, but that it is in the process of happening now. Imagine not that the shop drawing will change the design, but that the shop drawing is being drawn, that it has been drawn, that the steel is being fabricated, that the steel is being installed, that the building is occupied, that the connection has failed.

THIRD: Engage in a dialogue to prepare your responses and actions, until you have reduced the cinegraph to an ethical resolution. Do not conclude the dialogue until you are prepared to do what should be done when faced with the reality of what you have imagined.

Herein are tested the responses: I am not responsible for changes; someone else is responsible; they are supposed to tell me about any changes first and send me calculations; they are engineers, too; I wasn't hired for steel detailing; my review is only for general, not specific compliance; I look at every steel shop drawing they send me; there is only so much time and money for shop drawing reviews; the building department approved the building; I did what others would have done.

Common precautions and plans may not allow an eidetic reduction of your future misfortune that can conserve your values and meet your goals. Perhaps, what you are about is not common, or, if common, it holds peril for many lives. Then, try repeated reductions with new precautions until a satisfactory resolution is achieved. This can be unsettling work, but harness the agitation you feel. It is an antagonist you call voluntarily to your cause. Feel the weight of that critic on your shoulder.

People unwilling to deliberately disquiet themselves will refuse the proportions of a future misfortune or will dismiss it for an uncertainty of occurrence, but that is not an eidetic reduction. That is denial, and denial is not preparation.

When you are prepared to affect reality, you will see the Golem, and you will be prepared to do what must be done. Your preparation will reveal the Golem wrapped in the twisted metal the moment you cease conservation of your values.

What you are permitted to access by the discipline is an unacceptable reality at no loss to life or property. It is an opportunity to check your values, settle your goals, install controls, and prepare yourself (including the potent set) with the intentions that will order and encode information and instruct actions thereafter to reduce future misfortune.

As the result of living the premeditatio malorum, a structural engineer might decide to review a steel shop drawing for each and every load-bearing connection in a suspension bridge, which he will check off on the structural sheets one by one.

A design professional could discover that stacked, suspended, pedestrian bridge designs carry significant missing information and make the decision to increase the probability of the information furnished achieving the intended outcome. Drawings and specifications might encode copious redundant information to limit the possible outcomes. That might be done even for the absurdly simple design, because the consequences of a failure are too great to risk. Perhaps, the eidetic reduction of future misfortune can be handled in no other way.

With this preparation, ice can be read and understood, and hunters will return to their families.

b. The Second Discipline Gymnasia: To Train

Melete is work in the imagination, and it prepares one to affect reality by fortifying the consciousness with information about intentions, anticipations, and future misfortunes. Gymnasia is work at the other end of the spectrum, where there is training in real situations.

Training, as I believe you know, is not what is done in school. Students go to college to study for the same reason people go to banks for robbery. Money is kept in banks, and college is where theory is kept.

If done at all, training is begun and continued in practicing firms actively affecting reality. New practitioners, if ready to work, are not prepared to affect reality. Gymnasia contemplates that training will be in real situations, even if they do not directly affect reality. That is one function of reviews by seniors and making experienced minds available to curious beginners. It is the business of finding and sustaining curiosity. Gymnasia teaches the premeditatio malorum to give the novice a view, even if pessimistic, of the reality as it might be for a slow or careless apprentice.

Take note that people who do not train at the premeditatio "if this-then what?" are frequently seen playing at the less effective self-care exercise, "If only we had-well then."

THE CASE FOR OPTIMISM

There are hard lessons for us and much work undone here, but even as we sit among the broken dreams and scattered flying machines, the case for optimism is at hand.

Which Path is Chosen: Whose Fields are Crossed?

Much stock is put in the individual here. Have you asked yourself what was done to teamwork along the way? Teamwork was not buried here. I think it was improved. People who are individually prepared for quality results are both candidates for leadership and worthy contributors to a team aiming in that direction. Prepared individuals can settle on team goals and the paths to them. They are more confident of their attainment, because they are individually and intelligently committed to the goals and paths traveled.

The ideology of the submerged individual slogging along to forced songs fades now. The Internationale does not muffle the cries of people who will not bear repression. The fierce focus on the competition between ideologies is fading. You can read Winter and worry on the faces of the remaining ideologues. The immediate season is entrepreneurial Spring. And you can foresee the doors to profound optimism opening there on more hinges of consciousness free to act than ever before. The good of that is the better work our species does under those conditions.

Abraham Lincoln knew that when he mused, in a fragment written in 1861, on the success of the American experiment with the individual's freedom:

Without the Constitution and the Union, we could not have attained the result; but even these, are not the primary cause of our great prosperity. There is something back of these, entwining itself more closely about the human heart. That something, is the principle of "Liberty to all"-the principle that clears the path for all-gives hope to all-and, by consequence, enterprise, and industry to all.

Basler, ed., Collected Works of Abraham Lincoln, IV, 168-69. The paradigm led to Greek philosophy and now to Mr. Lincoln by tracing the lineage of its subject. You have been our subject. When the path chosen starts there, its course passes through a philosophy that would put a piece of the Republic's fate in your estate and make you its guardian. We stand on good ground with that, because quality comes from there.

Judgment Workers on the Path to Enterprise

The attention to individuals and their achievements has given us insights into how some people get into step with the choreography of quality.

Psychologists and researchers in this relatively new field have offered observations about the highest achievers:

1. They are able to transcend previous comfort zones, thereby opening themselves to new heights of achievement.

2. They are guided by compelling goals they set themselves.

3. They solve problems rather than place blame.

4. They confidently take risks after preparing their psyche for the worst consequences, and they are able to mentally rehearse anticipated actions or events beforehand.

5. They routinely issue "Catastrophic Expectations Reports" to themselves, against which they refine and rehearse countermeasures.

6. They are able to imagine themselves exceeding previous achievements, and they issue themselves "Blue Sky Reports."

7. They do what they do for the art and beauty of accomplishment.

There are other attributes of high achievers, I am sure, but this list is a fair mirror to reflect in. As you reflect, return to the askesis to test the durability of its traditions, and consider (with a nod to Mr. Lincoln's own reading list) whether the Greeks of antiquity were not correct after all to care for the self that the Republic might enjoy great health.

Carry What You Think Will Last

I hope you read more philosophy and art in the paradigm than procedure, for, unless you do, no good may come of it.

Managers, who, by this time, may wonder how to get their judgment workers to rendezvous with quality, are about to walk away from the philosophy of it altogether. The paradigm has space for all, but it has no separate rooms. Everyone must look to the care of the self that the firm may prosper. No one has permission to go blindly and unprepared into the reality of the next project. Not from the leading principal to the neophyte is anyone exempt from preparation to figure the probability of missing information and give good weight in it, to test false markers, or to lay first eye on the Golem.

The philosophy of the paradigm is in the pursuit of quality as an ethical odyssey. Those who achieve and sustain high quality are in that chase.

The art of it is in the happiness men and women attain from the work of architecture and engineering when they are prepared to affect reality in a way that has timeless beauty.

Paper Tigers

What place is there for the procedures, the checklists, and the standards that are the common mainstay of quality management?

Yes, paper and recordings on it are required. But the purpose of all that paper is not to drive workers to quality goals. Rather, its purpose is to record the experience the firm has had in successfully affecting reality, and to communicate the wisdom gained. It is history; therefore, it can be guidance, and I like that. It is yesterday's news, and that worries me.

That paper tools are made is not assurance that any user is alive to the hazards. Airlines believe in doing things by the book; pilots believe in doing things by the book; both mostly do. Yet wings are still ice coated and flaps set in wrong positions, and both occasionally stay that way right through the crash.

I said earlier that controls are deliberate potentials for disquieting information, but what I know about checklists is that people frequently bounce through them seeking confirmation (or more likely, just documentation) that they have done right. None of that helps us return home from the hunt safely.

The Inuit hunter learns the clues in ice for the love of life in him. Clues that have immense potential are not missed; they are immediate, intimate, and they solve the mystery of survival. You and I lose nearly every bit of the good in the paper tools we read. We are not disquieted, and we do not allow controls to make us deliberately so.

The clues in paper tools will have immense potential only for people intent on their preparation for changing reality. People in that chase will compose their histories in comprehensive and precise checklists, procedures, and standards, and they will send them ahead armed with their tested insights of the perils in new projects. They will offer them for wisdom that is immediate, intimate and for their potential to solve mysteries in stone, steel, and glass. But they will not cast confident eyes over their paper tools. The quality is not in them.

LAST WORDS

We went looking for the roots of our failures and for any help there was to comprehend passages back from deep disappointment. I think we hoped to emerge with clues to our happiness.

It turned out there were, all around us, everywhere, waiting, clues.

6. BIBLIOGRAPHICAL NOTES

1. To take your study into information and language theory, read much more about it in Jeremy Campbell's book, Grammatical Man: Information, Entropy, Language and Life, Simon & Schuster, Inc., 1982. Read, as well, in Chaos by James Gleick, Viking Penguin, Inc., 1987.

2. The structure of the consciousness is developed by Mihaly Csikszentmihalyi in Flow: The Psychology of Optimal Experience, Harper & Row, 1990. Flow is written for a wide audience by a serious researcher and scholar. It cannot be confused with a ten-minute book on anything.

3. For a discourse on evolution of species, you cannot do better than Richard Dawkins' book, The Blind Watchmaker, W.W. Norton & Company, 1986. Mr. Dawkins offers to sell you his Macintosh "Biomorph" program for ten dollars, which will have you evolving on-screen life forms in minutes. See the coupon in the back of his book. As far as I know, it is the only way to get the program.

4. Peter F. Drucker is frequently your best source and often the only one you will need on management theory, including goals, controls, and the behavior of workers in an information era. Management: Tasks, Responsibilities, Practices, Harper & Row, 1973, is comprehensive.

5. You can participate in a varied seminar on the self through the collection of papers in Technologies of the Self: A Seminar With Michel Foucault, University of Massachusetts Press, 1988. The editors are Luther H. Martin, Huck Gutman, and Patrick H. Hutton.

6. Two books adding to the study here through unique perspectives on life and the varied experience of it are: Departures by Paul Zweig, Harper & Row, 1986, and Arctic Dreams: Imagination and Desire in a Northern Landscape, Charles Scribner's Sons, 1986, by Barry Lopez, who introduced us to the Inuit hunter, Jacob Von Uexkull, and J. Wreford Watson.

STATISTICAL PROCESS CONTROL

INTRODUCTION

History

The fundamentals of Statistical Process Control (though that was not what it was called at the time) and the associated tool of the Control Chart were developed by Dr Walter A Shewhart of Bell Laboratories in the mid-1920s and introduced in 1924. His reasoning and approach were practical, sensible and positive. Shewhart developed a statistical chart (now called a control chart) for the control of product variables, along with the procedures and mathematical proof that make their use scientifically viable. To keep it so, he deliberately avoided overdoing the mathematical detail. In later years, significant mathematical attributes were assigned to Shewhart's thinking, with the result that this later work became better known than the pioneering application that Shewhart had worked up.

Statistical process control was pioneered by Walter A. Shewhart and taken up, with significant effect, by W. Edwards Deming and American industry during World War II to improve industrial production. Deming was also instrumental in introducing SPC methods to Japanese industry after that war. Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control through carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes seldom produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (such as the Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times. [1]

Scope

The crucial difference between Shewhart's work and the inappropriately-perceived purpose of SPC that later emerged, which typically involved mathematical distortion and tampering, is that his developments were made in the context, and with the purpose, of process improvement, as opposed to mere process monitoring. That is, they could be described as helping to get the process into that "satisfactory state" which one might then be content to monitor. Note, however, that a true adherent to Deming's principles would probably never reach that situation, following instead the philosophy and aim of continuous improvement.

Statistical process control (SPC) is mostly technical in nature; it is really just the technical arm of a broader SPC-based quality management system. It concentrates on finding process variation; correcting the variation depends on the creativity and ingenuity of the people involved.

SPC (statistical process control) began as a means of controlling production in manufacturing plants. It soon became obvious that SPC principles could be extended into other areas: non-production functions within a manufacturing firm, service industries of all kinds, management, health care, education, politics, family life, and life itself.

The system that has evolved from SPC is in two parts: technical and humanistic. SPC (in terms of TQM) is a system of management, one that concentrates on quality rather than on productivity or accounting (as used to be the case in the past, and still is in many firms). Actually, quality is included in a production and/or an accounting management system, and production and accounting are included in a quality management system. The problem is in the way we think about it and the way we approach it.

In production or accounting systems, quality tends to become only an appended function; it tends to be somewhat forgotten. The problems associated with getting a product out of the door tend to become more important than the problems associated with getting a "good" product out (making "quality" products).

In summary, the SPC (in terms of TQM) quality management system is a system where people work to accomplish certain goals. The goals are:

Quality (making a "good" product)
Productivity (getting the product "out of the door")
Accounting (making the product at least cost)
People working together harmoniously

In general, these are divided into two aspects:

Technical, called SPC
Humanistic, called TQM

DEFINITION

Statistic

a) Is the science that deals with the collection, classification, analysis, and making of inferences from data or information. Statistics is subdivided into two categories:

Descriptive Statistic (describes the characteristics of a product or process using information collected on it).

Inferential Statistic (draws conclusions on unknown process parameters based on information contained in a sample).

b) Is a measure of a characteristic of a sample of the universe. Ideally, statistics should resemble and closely approximate the universe parameters they represent. In statistics, symbols are used to represent these characteristics, mostly for ease of calculation.
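
A small sketch, using invented measurements, may help separate the two categories: the sample mean and sample standard deviation describe the sample itself (descriptive), and the same numbers are then pressed into service as estimates of the unknown universe parameters (inferential).

    import statistics

    # Hypothetical sample of five rod lengths (mm) drawn from a much larger universe.
    sample = [49.8, 50.1, 50.0, 49.9, 50.2]

    # Descriptive use: these numbers describe this sample only.
    x_bar = statistics.mean(sample)    # sample mean
    s = statistics.stdev(sample)       # sample standard deviation

    # Inferential use: x_bar and s serve as estimates of the universe
    # parameters (the true process mean and true process standard deviation).
    print(f"estimate of universe mean : {x_bar:.3f}")
    print(f"estimate of universe sigma: {s:.3f}")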

Process (lat. processus - movement)

a) is the transformation of a set of inputs, which can include materials, actions, methods and operations, into desired outputs, in the form of products, information, services or, generally, results.

b) is a systematic series of actions directed to the achievement of a goal.

c) is any collection of activities that are intended to achieve some result, typically to create added value for a customer.

Industrial process is "any process that comes in physical contact with the hardware or software" that will be delivered to an external customer, up to the point the product is packaged (e.g. manufacturing computers, food preparation for mass customer consumption, oil refining, changing iron ore into steel).

Business or administrative processes include all service processes and processes that support industrial processes (e.g. billing, payroll, and engineering changes).

d) is a set of usually sequential value-added tasks that use organizational resources to produce a product or service. A process can be unique to a single department or can cross many departments within an organization. A process is usually repetitive in nature and is needed to make a product or achieve a result.

e) is a naturally occurring or designed sequence of changes of properties/attributes of a system/object. More precisely, and from the most general systemic perspective, every process is representable as a particular trajectory (or part thereof) in a system's phase space.

Every measurement is a process. The process of measurement is the fundamental concept in physics, and, in practice, in every field of science and engineering.

Identification of a process is also a subjective task, because the whole universe demonstrates one continuous universal "process", and every arbitrarily selected human behaviour can be conceptualized as a process. This aspect of process recognition is closely dependent on human cognitive factors. According to the observations included in the systemic TOGA meta-theory, the concepts system, process, function and goal are closely and formally connected; in parallel, every process has the system property, i.e., it can be seen as an abstract dynamic system/object and arbitrarily divided into a network of processes. This division depends on the character of the changes and on socio-cognitive factors, such as their perception, the tools, and the goal of the observer.

For the above goal-oriented reason, from the industrial managerial point of view, the following inputs can initially be applied in an engineering process specification: people, machines and tools, materials, energy, information, professional knowledge, capital, time and space.

Fig. 1 Process

SPC

a) Statistical process control (SPC) is a method for achieving quality control in manufacturing processes. It is a set of methods using statistical tools such as mean, variance and others, to detect whether the process observed is under control.

Fig. 2 Sample of The Process Path

b) Statistical Process Control (SPC) is a method of monitoring, controlling and, ideally, improving a process through statistical analysis. Its four basic steps include measuring the process, eliminating variances in the process to make it consistent, monitoring the process, and improving the process to its best target value.

c) Statistical process control is the application of statistical methods to identify and control the special causes of variation in a process. Statistical Process Control (SPC) is the equivalent of a histogram plotted on its side over time. Every new point is statistically compared with previous points as well as with the distribution as a whole in order to assess likely considerations of process control (i.e. control, shifts, and trends). Forms with zones and rules are created and used to simplify plotting, monitoring, and decision making at the operator level. SPC separates special cause from common cause variation in a process at the confidence level built into the rules being followed (typically 99.73% or 3 sigma).

d) SPC is about control, capability and improvement, but only if used correctly and in a working environment which is conducive to the pursuit of continuous quality improvement, with the full involvement of every company employee.

SPC is generally accepted to mean control (management) of the process through the use of statistic or statistical methods.
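
As a quick check on the 99.73% (3 sigma) figure quoted in definition c) above, and assuming a normal distribution, the fraction of results expected within three standard deviations of the mean can be computed directly:

    import math

    # P(|Z| <= 3) for a standard normal variable, via the error function.
    coverage = math.erf(3 / math.sqrt(2))
    print(f"{coverage:.4%}")   # about 99.73%, i.e. roughly 3 results in 1000 fall outside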

Taking the guesswork out of quality control, Statistical Process Control (SPC) is a scientific, data-driven methodology for quality analysis and improvement. Statistical Process Control (SPC) is an industry-standard methodology for measuring and controlling quality during the manufacturing process. Quality data (measurements) are collected from products as they are being produced. These data are then plotted on a graph with pre-determined control limits. Control limits are determined by the capability of the process, whereas specification limits are determined by the customer's needs.

Data that falls within the control limits indicates that everything is operating as expected. Any variation within the control limits is likely due to a common cause—the natural variation that is expected as part of the process. If data falls outside of the control limits, this indicates that an assignable cause is likely the source of the product variation, and something within the process should be changed to fix the issue before defects occur.
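
The sketch below shows, under simple assumptions, how control limits come from the process data themselves rather than from any specification: an X-bar chart built from small subgroups, using the conventional constant A2 = 0.577 for subgroups of size five. The subgroup measurements are invented for illustration.

    # X-bar chart control limits from hypothetical subgroup data (subgroup size n = 5).
    # A2 = 0.577 is the standard control-chart constant for n = 5.
    subgroups = [
        [10.2, 9.9, 10.1, 10.0, 9.8],
        [10.1, 10.3, 9.9, 10.0, 10.2],
        [9.7, 10.0, 10.1, 9.9, 10.0],
        [10.0, 10.2, 9.8, 10.1, 9.9],
    ]
    A2 = 0.577

    x_bars = [sum(g) / len(g) for g in subgroups]    # subgroup means
    ranges = [max(g) - min(g) for g in subgroups]    # subgroup ranges

    grand_mean = sum(x_bars) / len(x_bars)
    r_bar = sum(ranges) / len(ranges)

    ucl = grand_mean + A2 * r_bar    # upper control limit
    lcl = grand_mean - A2 * r_bar    # lower control limit
    print(f"UCL={ucl:.3f}  centre={grand_mean:.3f}  LCL={lcl:.3f}")

    # A subgroup mean outside [LCL, UCL] signals a likely assignable (special) cause.
    for i, xb in enumerate(x_bars, start=1):
        if not lcl <= xb <= ucl:
            print(f"subgroup {i}: mean {xb:.3f} is outside the control limits")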

With real-time SPC you can:

Dramatically reduce variability and scrap
Scientifically improve productivity
Reduce costs
Uncover hidden process personalities
React instantly to process changes
Make real-time decisions on the shop floor

To quantify the return on your SPC investment, start by identifying the main areas of waste and inefficiency at your facility. Common areas of waste include scrap, rework, over inspection, inefficient data collection, incapable machines and/or processes, paper-based quality systems and inefficient lines. You can start to quantify the value of an SPC solution by asking the following questions:

Are your quality costs really known?
Can current data be used to improve your processes, or is it just data for the sake of data?
Are the right kinds of data being collected in the right areas?
Are decisions being made based on true data?
Can you easily determine the cause of quality issues?
Do you know when to perform preventative maintenance on machines?
Can you accurately predict yields and output results?

There is a lot of confusion about SPC. Walter Shewhart, Don Wheeler, and the BDA booklets "Why SPC?" and "How SPC?" say one thing. Almost all other books on SPC say something else. The reason for the disagreement is not that the other books are completely wrong in everything they say (they are wrong in some things). The main difference is that the aim of SPC as intended by Walter Shewhart and the aim as described by most other writers are completely different.

The two aims are:

Adjustment: To detect quickly when a process needs readjustment or repair.
Improvement: To find the most urgently needed improvement in the system itself.

Adjustment is all most books cover. It has some use, because it prevents things getting worse. The point of making the adjustment is to try to keep the product "within specification". This makes the statistical methods quite complicated.

To be sure that most of the individual results are within specification, when we have measured only a few samples, we have to know that the process is stable and that the individual measurements are "normally distributed". Otherwise we cannot predict the proportion of the product within specification. Unfortunately, as Henry Neave of this group has shown by computer simulation, even if these assumptions are true, the predictions this method makes can be wildly wrong. What is worse, we know in practice that the assumptions never are true.

Improvement is what Dr Shewhart and Dr Deming are talking about, but hardly any books mention it; or, if they do, they are more concerned with the first problem, and so suggest rather ineffective ways of finding improvements. Adjustment is needed too, but it takes second place, instead of being the sole aim.

The reason why the approach has to be different is that some "signals" that you can pick up from a control chart tell you little about the nature of the underlying reason for going out of control. For example, a slow drift in the mean can rarely point straight to a cause of change, whereas an isolated point quite unlike the points on either side of it usually tells you all you need to know. At least, it does if the process operators themselves are keeping the chart: they will usually know just what happened at that point. By comparison, a slow drift may result from something that started to go wrong long before.

Naturally, if all you are going to do is to alter the controls to bring the mean back into line, you want to detect a slow drift or change as soon as possible, and put it right. This is why many books suggest such a wide range of "out of control" signals, such as runs above the mean, or runs in the same direction.

On the other hand, if you want to trace underlying causes, and do something permanent about them, these signals are usually nothing but a nuisance. The process gets readjusted before you can trace a cause. So in the Deming-Shewhart approach, the only signal worth much is the simple 3SD above or below the mean. The distribution, normal or otherwise, rarely matters. And of course, we do NOT have to start with a process that is "under control".

The aim is first of all to find out if the process is stable. If it is not stable, that is, not "under statistical control", the aim is to set priorities for investigation. The investigations are to find ways of changing the process by permanently removing "special causes" of variation.

If, on the other hand, the process is already stable, we can use the control chart to demonstrate the effect of experimental changes. Any change is likely to make the process unstable for a short time, so the control chart is needed to tell us when the new equilibrium has been reached.

Instead of emphasizing complicated rules for detecting drift or a change of mean, what is needed is great care to see that information about factors which might affect the process, and knowledge of changes, is immediately available to someone who can see the connections and can get things done. In this approach, control charts on INPUTS to the process, such as raw materials, temperatures, pressures, and so on, are as important as, or more important than, control charts on the final product. For adjustment, only the final product matters.

Obviously improvement is better in the long run, from all points of view. If simply adjusting the process is enough to meet specifications, improvement will meet them many times over. And the general effects on the system which result from improvement will have good effects that spread far and wide.

The statistical methods used in improvement are also much easier to understand and use. The drawback, in many companies, is that short-term thinking rules, and no-one has the power to change the system.

STATISTICAL PROCESS CONTROL

What do “in control” and “out of control” mean?

Suppose that we are recording, regularly over time, some measurements from a process. The measurements might be lengths of steel rods after a cutting operation, or the lengths of time to service some machine, or your weight as measured on the bathroom scales each morning, or the percentage of defective (or non-conforming) items in batches from a supplier, or measurements of Intelligence Quotient, or times between sending out invoices and receiving the payment etc.

A series of line graphs or histograms can be drawn to represent the data as a statistical distribution. It is a picture of the behaviour of the variation in the measurement that is being recorded. If a process is deemed "stable", then the concept is that it is in statistical control. The point is that, if an outside influence impacts upon the process (e.g., a machine setting is altered or you go on a diet), then, in effect, the data are of course no longer all coming from the same source. It therefore follows that no single distribution could possibly serve to represent them. If the distribution changes unpredictably over time, then the process is said to be out of control. As a scientist, Shewhart knew that there is always variation in anything that can be measured. The variation may be large, or it may be imperceptibly small, or it may be between these two extremes; but it is always there.

What inspired Shewhart’s development of the statistical control of processes was his observation that the variability which he saw in manufacturing processes often differed in behaviour from that which he saw in so-called “natural” processes – by which he seems to have meant such phenomena as molecular motions.

Wheeler and Chambers combine and summarise these two important aspects as follows:

"While every process displays variation, some processes display controlled variation, while others

display uncontrolled variation."

In particular, Shewhart often found controlled (stable) variation in natural processes and uncontrolled (unstable) variation in manufacturing processes. The difference is clear. In the former case, we know what to expect in terms of variability; in the latter we do not. We may predict the future, with some chance of success, in the former case; we cannot do so in the latter.

Why are "in control" and "out of control" important?

Shewhart gave us a technical tool to help identify the two types of variation: the control chart - (see Control Charts as the annex to this topic).

What is important is the understanding of why correct identification of the two types of variation is so vital. There are at least three prime reasons.

First, when there are irregular large deviations in output because of unexplained special causes, it is impossible to evaluate the effects of changes in design, training, purchasing policy etc. which might be made to the system by management. The capability of a process is unknown, whilst the process is out of statistical control.

Second, when special causes have been eliminated, so that only common causes remain, improvement then has to depend upon management action. For such variation is due to the way that the processes and systems have been designed and built – and only management has authority and responsibility to work on systems and processes. As Myron Tribus, Director of the American Quality and Productivity Institute, has often said:

“The people work in a system.

The job of the manager is

To work on the system

To improve it, continuously,

With their help.”

Finally, something of great importance, but which has to be unknown to managers who do not have this understanding of variation, is that by (in effect) misinterpreting either type of cause as the other, and acting accordingly, they not only fail to improve matters – they literally make things worse.

These implications, and consequently the whole concept of the statistical control of processes, had a profound and lasting impact on Dr Deming. Many aspects of his management philosophy emanate from considerations based on just these notions.

So why SPC?

The plain fact is that when a process is within statistical control, its output is indiscernible from random variation: the kind of variation which one gets from tossing coins, throwing dice, or shuffling cards. Whether or not the process is in control, the numbers will go up, the numbers will go down; indeed, occasionally we shall get a number that is the highest or the lowest for some time. Of course we shall: how could it be otherwise? The question is - do these individual occurrences mean anything important?

When the process is out of control, the answer will sometimes be yes. When the process is in control, the answer is no.

So the main response to the question Why SPC? is therefore this: It guides us to the type of action that is appropriate for trying to improve the functioning of a process. Should we react to individual results from the process (which is only sensible, if such a result is signalled by a control chart as being due to a special cause) or should we instead be going for change to the process itself, guided by cumulated evidence from its output (which is only sensible if the process is in control)?

Process improvement needs to be carried out in three chronological phases:

Phase 1: Stabilisation of the process by the identification and elimination of special causes;
Phase 2: Active improvement efforts on the process itself, i.e. tackling common causes;
Phase 3: Monitoring the process to ensure the improvements are maintained, and incorporating additional improvements as the opportunity arises.

Control charts have an important part to play in each of these three Phases. Points beyond control limits (plus other agreed signals) indicate when special causes should be searched for. The control chart is therefore the prime diagnostic tool in Phase 1. All sorts of statistical tools can aid Phase 2, including Pareto Analysis, Ishikawa Diagrams, flow-charts of various kinds, etc., and recalculated control limits will indicate what kind of success (particularly in terms of reduced variation) has been achieved. The control chart will also, as always, show when any further special causes should be attended to. Advocates of the British/European approach will consider themselves familiar with the use of the control chart in Phase 3. However, it is strongly recommended that they consider the use of a Japanese Control Chart (q.v.) in order to see how much more can be done even in this Phase than is normal practice in this part of the world.

Classical quality control was achieved by observing important properties of the finished product and accepting or rejecting each finished item. In contrast to this technique, statistical process control uses statistical tools to observe the performance of the production line and to predict significant deviations that may result in rejected products.

The underlying assumption in the SPC method is that any production process will produce products whose properties vary slightly from their designed values, even when the production line is running normally, and these variances can be analyzed statistically to control the process. For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams, and some will have slightly less, producing a distribution of net weights. If the production process itself changes (for example, the machines doing the manufacture begin to wear) this distribution can shift or spread out. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than it was designed to. If this change is allowed to continue unchecked, product may be produced that fall outside the tolerances of the manufacturer or consumer, causing product to be rejected.

By using statistical tools, the operator of the production line can discover that a significant change has been made to the production line, by wear and tear or other means, and correct the problem - or even stop production - before producing product outside specifications. An example of such a statistical tool is the Shewhart control chart; in the cereal example above, the operator would plot the net weight of each box on the Shewhart chart.
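To make the idea concrete, here is a minimal Python sketch (not from the source text) that computes 3-sigma control limits for a run of box weights and flags points outside them. The weight values are invented for illustration, and using the overall sample standard deviation for sigma is a simplifying assumption; a production individuals chart would normally estimate sigma from moving ranges.

# Sketch: flag cereal-box net weights that fall outside 3-sigma control limits.
# The weights below are made-up illustration data, not measurements from this course.
import statistics

weights = [502, 498, 501, 499, 503, 497, 500, 502, 498, 511]   # grams (hypothetical)

mean = statistics.mean(weights)
sigma = statistics.stdev(weights)          # sample standard deviation (simplified estimate)
ucl = mean + 3 * sigma                     # upper control limit
lcl = mean - 3 * sigma                     # lower control limit

print(f"centre line = {mean:.2f} g, UCL = {ucl:.2f} g, LCL = {lcl:.2f} g")
for i, w in enumerate(weights, start=1):
    if w > ucl or w < lcl:
        print(f"box {i}: {w} g is outside the control limits (possible special cause)")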

1. "Why SPC?" British Deming Association SPC Press, Inc. 1992

III. IMPROVEMENT TOOLS (7 QC TOOLS) (see appendix)

1. Check Sheet
2. Pareto Diagram
3. Histogram
4. Flow Chart
5. Scatter Diagram
6. Cause-and-Effect (=Fishbone) Diagram
7. Run Chart (=Control Chart)



IV. PROBABILITY

There are five main reasons why sampling is usually preferred over 100% inspection, as follows:

The universe, or population, may be too large, or of such a nature that 100% inspection is impossible or too costly.

The measurement itself may be too costly in relation to the product worth.

The measurement may destroy the product.

The products or the measurements may be dangerous.

The boredom and fatigue of 100% inspection may cause more errors, and thus be less reliable, than sampling (an unfortunate psychological factor involved in most 100% inspection). Through the years, quality control practitioners have proved that this factor is very potent indeed, and it justifies the use of sampling procedures over 100% inspection in most cases.

However, 100% inspection should be limited to expensive items or extremely important characteristics where measurement boredom is minimized. In SPC and quality control, the most frequently used summaries are measures of:

o Central tendency

Mean: The mean of a series of measurements is determined by adding the values together and then dividing this sum by the total number of values.

Ungrouped data: $\bar{X} = \dfrac{\sum_{i=1}^{n} X_i}{n} = \dfrac{X_1 + X_2 + \dots + X_n}{n}$

Grouped data: $\bar{X} = \dfrac{\sum_{i=1}^{h} f_i X_i}{n}$

where
$\bar{X}$ = average
$\sum$ = symbol meaning "sum of"
$X_1, X_2, \dots, X_n$ = observed values identified by the subscripts $1, 2, \dots, n$, or the general subscript $i$
$n$ = number of observed values (ungrouped data) or the sum of the frequencies (grouped data)
$f_i$ = frequency in a cell or frequency of an observed value
$X_i$ = cell midpoint or observed value
$h$ = number of cells or number of observed values
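A short Python sketch of both calculations may help; the data values and cell layout below are illustrative assumptions, not figures from this course.

# Ungrouped data: mean = sum of observations / number of observations.
ungrouped = [27, 28, 29, 30, 30, 31, 33]
mean_ungrouped = sum(ungrouped) / len(ungrouped)

# Grouped data: mean = sum(f_i * X_i) / n, where X_i is the cell midpoint,
# f_i the cell frequency, and n the sum of the frequencies.
cells = [(3.30, 3), (3.40, 9), (3.50, 38), (3.60, 3)]    # (midpoint, frequency), illustrative
n = sum(f for _, f in cells)
mean_grouped = sum(x * f for x, f in cells) / n

print(mean_ungrouped, mean_grouped)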

Mode: The mode is the most frequently occurring number in a group of values.

Median: The median is a central or midway value from which the other values deviate in some set pattern; central tendency is the tendency of the data to cluster about such a value. The tendency may be:
o STRONG (the values or measurements group closely about the central value)
o WEAK (the values or measurements do not group closely about the central value)

The median for continuous (grouped) data is found with the grouped technique:

$Md = L_m + \left(\dfrac{\frac{n}{2} - cf_m}{f_m}\right) i$

where
$Md$ = median
$L_m$ = lower boundary of the cell containing the median
$n$ = total number of observations
$cf_m$ = cumulative frequency of all cells below $L_m$
$f_m$ = frequency of the cell containing the median
$i$ = cell interval (width)
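A minimal Python sketch of the grouped-median calculation; the cell boundaries, frequencies, and 0.05 cell width are illustrative assumptions.

def grouped_median(cells, width):
    """cells: list of (lower_boundary, frequency) in ascending order; width: cell width."""
    n = sum(f for _, f in cells)
    cum = 0
    for lower, f in cells:
        if cum + f >= n / 2:                          # this cell contains the median
            return lower + ((n / 2 - cum) / f) * width
        cum += f

# illustrative cells: (lower boundary, frequency)
cells = [(3.275, 3), (3.325, 3), (3.375, 9), (3.425, 32),
         (3.475, 38), (3.525, 10), (3.575, 3), (3.625, 1), (3.675, 1)]
print(grouped_median(cells, width=0.05))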

o Measures of dispersion

Range: The range is the difference between the highest value in a series of values or sample and the lowest value in that same series.

R = Xh - Xl

R = range Xh = highest observation in a series Xl = lowest observation in a series



Standard Deviation

The standard deviation shows the dispersion of the data within the distribution. For n observed values it is calculated as

$s = \sqrt{\dfrac{n\sum_{i=1}^{n} X_i^{2} - \left(\sum_{i=1}^{n} X_i\right)^{2}}{n(n-1)}}$

o Control Charts

VARIABLE


[Figure: relative positions of the average, median, and mode for symmetrical, positively skewed, and negatively skewed distributions.]


The central lines for the $\bar{X}$ and $R$ charts are obtained using the equations

$\bar{\bar{X}} = \dfrac{\sum_{i=1}^{g} \bar{X}_i}{g} \qquad \bar{R} = \dfrac{\sum_{i=1}^{g} R_i}{g}$

where g is the number of subgroups. Trial control limits for the charts are established at $\pm 3$ standard deviations from the central line, as shown by the equations

$UCL_{\bar{X}} = \bar{\bar{X}} + A_2\bar{R} \qquad LCL_{\bar{X}} = \bar{\bar{X}} - A_2\bar{R}$
$UCL_{R} = D_4\bar{R} \qquad LCL_{R} = D_3\bar{R}$

where $A_2$, $D_3$ and $D_4$ are tabulated factors that depend on the subgroup size.

ATTRIBUTE

$p = \dfrac{m}{b}$

where m represents the number of nonconforming items, b is the number of items in the sample, and p is the proportion nonconforming.

$\bar{p} = \dfrac{p_1 + p_2 + \dots + p_k}{k}$

where $\bar{p}$ is the mean proportion, k is the number of samples audited, and $p_k$ is the kth proportion obtained.

$UCL = c + k\sqrt{c} \qquad LCL = c - k\sqrt{c}$

where c is the center line (the average count of nonconformities) and k is the sigma level intended. The most commonly used sigma level is 3.


$\bar{u} = \dfrac{u}{n}$

where $\bar{u}$ represents the average defects per sample, u is the total number of defects, and n is the sample size.
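As a rough illustration of how attribute limits of this kind are computed, the following Python sketch derives 3-sigma limits for a p chart and a c chart. The sample counts are invented, and the p-chart sigma formula sqrt(p-bar(1 - p-bar)/b) is the standard textbook expression assumed here for illustration.

# Sketch: 3-sigma limits for a p chart (proportion nonconforming) and a c chart
# (count of nonconformities).  Sample data are invented for illustration.
import math

# p chart: proportions from several audited samples of b items each
nonconforming = [3, 5, 2, 4, 6, 3]        # m, nonconforming items per sample
b = 100                                   # items per sample
p_values = [m / b for m in nonconforming]
p_bar = sum(p_values) / len(p_values)
sigma_p = math.sqrt(p_bar * (1 - p_bar) / b)
ucl_p, lcl_p = p_bar + 3 * sigma_p, max(0.0, p_bar - 3 * sigma_p)

# c chart: average count of nonconformities per inspection unit
counts = [7, 4, 9, 6, 5, 8]
c_bar = sum(counts) / len(counts)
ucl_c, lcl_c = c_bar + 3 * math.sqrt(c_bar), max(0.0, c_bar - 3 * math.sqrt(c_bar))

print(ucl_p, lcl_p, ucl_c, lcl_c)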

In real life, $\bar{X}$ seldom exactly equals $\mu$, due to the differences in the number of measurements used to calculate the two values; $\bar{X}$ is therefore what is called a random variable.

o The formula for the universe (population) standard deviation is

$\sigma = \sqrt{\dfrac{\sum_{i=1}^{N}\left(X_i - \mu\right)^{2}}{N}}$

o The formula for the sample standard deviation is

$s = \sqrt{\dfrac{\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^{2}}{n-1}}$

The reason for dividing by n - 1 is to offset the bias of small sample sizes; when the sample size becomes large, there is little difference between the two formulas.
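The difference between the two formulas can be seen with Python's statistics module, which implements both; the readings below are a handful of illustrative values, not course data.

# The population (universe) standard deviation divides by N; the sample standard
# deviation divides by n - 1 to offset the bias of small samples.
import statistics

data = [9.85, 10.00, 9.95, 10.10, 9.90, 10.05]
print(statistics.pstdev(data))   # divides by N     (universe formula)
print(statistics.stdev(data))    # divides by n - 1 (sample formula)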

1. DISCRETE DISTRIBUTION

A discrete distribution deals with random variables that can take on a finite or countably infinite number of values.

1.1 HYPERGEOMETRIC DISTRIBUTION

The hypergeometric probability distribution occurs when the population is finite and the random sample is taken without replacement. This procedure calculates the hypergeometric probability distribution to evaluate hypotheses relating to sampling without replacement from small populations. The formula for the hypergeometric is constructed of three combinations (total combinations, nonconforming combinations, and conforming combinations) and is given by

$P(d) = \dfrac{\dbinom{D}{d}\dbinom{N-D}{n-d}}{\dbinom{N}{n}}$

where N is the lot (population) size, D is the number of nonconforming units in the lot, n is the sample size, and d is the number of nonconforming units in the sample.
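A minimal Python sketch of the three-combination calculation; the lot size, number nonconforming, and sample size are assumed values for illustration.

# Hypergeometric probability built from the three combinations in the text:
# total combinations, nonconforming combinations, and conforming combinations.
from math import comb

N, D, n = 20, 4, 5            # lot size, nonconforming in the lot, sample size (assumed)

def hypergeom(d):
    return comb(D, d) * comb(N - D, n - d) / comb(N, n)

print(hypergeom(0), hypergeom(1))   # probability of 0 or 1 nonconforming in the sample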

1.2 BINOMIAL DISTRIBUTION

The binomial probability distribution is applicable to discrete probability problems that have an infinite number of items or that have a steady stream of items coming from a work center. The binomial is applied to problems that have attributes, such as conforming or nonconforming, success or failure, pass or fail, and heads or tails.


Negative Binomial: Also used to study accidents, this is a more general case than the Poisson; it allows for the probability of accidents clustering differently in subgroups of the population. However, the theoretical properties of this distribution and its possible relationship to real events are not well known. Another version of the negative binomial is used for the marginal distribution of binomials and is often used to predict the termination of real-time events. An example is the probability of terminating listening to a non-answering phone after n rings.

More specifically, consider the following experimental process: 1. There are n trials. 2. Each trial results in a success or a failure. 3. The probability of a success, p, is constant from trial to trial. 4. The trials are independent.

The probability that a random variable X with binomial distribution B(n, p) is equal to the value k, where k = 0, 1, ..., n, is given by the binomial formula for a single term

$P(X = k) = \dbinom{n}{k} p^{k} (1-p)^{n-k}$

where

$\dbinom{n}{k} = \dfrac{n!}{k!\,(n-k)!}$

is a binomial coefficient.
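A short Python sketch of the single-term binomial calculation; the sample size and proportion used in the call are assumed illustration values.

from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# e.g. probability of exactly 2 nonconforming items in a sample of 10 when p = 0.05 (assumed)
print(binomial_pmf(2, 10, 0.05))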

1.3 POISSON DISTRIBUTION

The Poisson distribution is applicable to many situations involving observations per unit of time: for example, the count of cars arriving at a highway toll booth in 1-min intervals, the count of machine breakdowns in 1 day, and the count of shoppers entering a grocery store in 5-min intervals. The distribution is also applicable to situations involving observations per unit amount. The Poisson is applicable when n is quite large and p0 is small; it calculates probabilities for samples which are very large in an even larger population and is used to approximate the binomial distribution (try comparing it with the binomial!). The distribution is also often used in a completely different way, for the analysis of how rare events, such as accidents, accumulate for a single individual. For example, you can use it to estimate your chances of getting one, two, three or more accidents in any one year, considering that on average people get 'U' accidents per year.

$P(X = k) = \dfrac{e^{-\lambda}\lambda^{k}}{k!}$

where $\lambda$ is the average number of occurrences in the specified interval. For the Poisson distribution, the mean and the variance are both equal to $\lambda$.
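The following Python sketch computes Poisson probabilities and, for assumed values of n and p, compares them with the exact binomial probabilities they approximate.

from math import comb, exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# With n large and p small, the binomial is well approximated by a Poisson with lam = n * p.
n, p = 500, 0.01              # assumed illustration values
for k in range(4):
    print(k, binomial_pmf(k, n, p), poisson_pmf(k, n * p))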

2. CONTINUOUS DISTRIBUTION

A continuous distribution may assume an infinite number of values over a finite or infinite range. The probability distribution of a continuous random variable x is often called the probability density function f(x).

2.1 NORMAL DISTRIBUTION

The normal curve is a continuous probability distribution. Probability problems that involve continuous data can be solved using the normal probability distribution. The so-called "standard normal distribution" is given by taking $\mu = 0$ and $\sigma = 1$ in a general normal distribution.


An arbitrary normal distribution can be converted to a standard normal distribution by changing variables to $z = \dfrac{x - \mu}{\sigma}$, so $dx = \sigma\,dz$, yielding the standard normal density

$\varphi(z) = \dfrac{1}{\sqrt{2\pi}}\, e^{-z^{2}/2}$
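A minimal Python sketch of the change of variables, using an assumed process mean and standard deviation.

from statistics import NormalDist

mu, sigma = 10.0, 0.05        # assumed process mean and standard deviation
x = 10.1
z = (x - mu) / sigma          # change of variables to the standard normal
print(z)
print(NormalDist().cdf(z))                 # P(Z <= z) from the standard normal
print(NormalDist(mu, sigma).cdf(x))        # same probability without standardising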

2.2 EXPONENTIAL DISTRIBUTION

Given a Poisson distribution with rate of change $\lambda$, the distribution of waiting times between successive changes (with $k = 0$) is

$D(x) = P(X \le x)$   (1)
$\phantom{D(x)} = 1 - P(X > x)$   (2)
$\phantom{D(x)} = 1 - e^{-\lambda x}$   (3)

and the probability distribution function is

$P(x) = D'(x) = \lambda e^{-\lambda x}$   (4)
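A short Python sketch of the waiting-time calculation for an assumed rate lambda.

from math import exp

lam = 2.0                     # assumed rate of change (events per unit time)

def cdf(x):                   # D(x) = P(X <= x) = 1 - e^(-lam * x)
    return 1 - exp(-lam * x)

def pdf(x):                   # P(x) = lam * e^(-lam * x)
    return lam * exp(-lam * x)

print(cdf(1.0), pdf(1.0))     # probability of waiting at most 1 unit, and the density there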

SKEWNESS: When a distribution lacks symmetry, it is considered skewed.

KURTOSIS: The peakedness of the data.


FLOWCHART

INTRODUCTION In the systematic planning or examination of any process, whether it is a clerical, manufacturing, or managerial activity, it is necessary to record the series of events and activities, stages and decisions in a form which can be easily understood and communicated to all.

If improvements are to be made, the facts relating to the existing method must be recorded first. The statements defining the process should lead to its understanding and will provide the basis of any critical examination necessary for the development of improvements. It is essential, therefore, that the descriptions of processes are accurate, clear and concise.

Process mapping (sometimes called ‘blueprinting’ or process modeling), in either a structured or unstructured format, is a prerequisite to obtaining an in-depth understanding of a process, before the application of quality management tools and techniques.

Process mapping is a communication tool that helps an individual or an improvement team understand a system or process and identify opportunities for improvement. Process maps and flowcharts are therefore frequently used to communicate the components of a system or process to others whose skills and knowledge are needed in the improvement effort.

Purpose: The purpose of the flowchart analysis is to learn why the current system/process operates in the manner it does, and to prepare a method for objective analysis.

The flowchart techniques can also be used to study a simple system and how it would look if there were no problems.

A flowchart is a pictorial representation of the steps in a given process. The steps are presented graphically in sequence so that team members can examine the order presented and come to a common understanding of how the process operates.

Flowcharts are maps or graphical representations of a process. Steps in a process are shown with symbolic shapes, and the flow of the process is indicated with arrows connecting the symbols. Computer programmers popularized flowcharts in the 1960's, using them to map the logic of programs. In quality improvement work, flowcharts are particularly useful for displaying how a process currently functions or could ideally function. Flowcharts can help you see whether the steps of a process are logical, uncover problems or miscommunications, define the boundaries of a process, and develop a common base of knowledge about a process. Flowcharting a process often brings to light redundancies, delays, dead ends, and indirect paths that would otherwise remain unnoticed or ignored. But flowcharts don't work if they aren't accurate, if team members are afraid to describe what actually happens, or if the team is too far removed from the actual workings of the process.

Flowcharts can be used to describe an existing process or to present a proposed change in the flow of a process. Flowcharts are the easiest way to "picture" a process, especially if it is very complex. Flowcharts should include every activity in the process. A flowchart should be the first step in identifying problems and targeting areas for improvement.

THE FLOWCHART

1. Definition
(1) A flowchart is a graphical representation of the specific steps, or activities, of a process.
(2) A flowchart is a pictorial (graphical) representation of the process flow showing the process inputs, activities, and outputs in the order in which they occur.

Flowchart Categories:
A theoretical flowchart is a flowchart that is prescribed by some overarching policy, procedure, or operations manual; theoretical flowcharts describe the way the process "should operate".
An actual flowchart depicts how the process is actually working.


The best flowchart is one in which the people involved in the process develop the optimum (or close to optimum) flow based on the process expertise of the group.

Types of Flowchart:

a. System Flowchart
A system flowchart is a pictorial representation of the sequence of operations and decisions that make up a process. It shows what is being done in a process.

b. Layout Flowchart
A layout flowchart depicts the floor plan of an area, usually including the flow of paperwork or goods and the location of equipment, file cabinets, storage areas, and so on. These flowcharts are especially helpful in improving the layout to utilize a space more efficiently.

There are many varieties of flowcharts and scores of symbols that you can use. Experience has shown that there are three main types that work for almost all situations:

High-level flowcharts map only the major steps in a process for a good overview.

Detailed flowcharts show a step-by-step mapping of all events and decisions in a process.

Deployment flowcharts organize the flowchart by columns, with each column representing a person or department involved in a process.


The trouble spots in a process usually begin to appear as a team constructs a detailed flowchart.

Although there are many symbols that can be used in flowcharts to represent different kinds of steps, accurate flowcharts can be created using very few (e.g. oval, rectangle, diamond, delay, cloud). Flowcharts were used for years in industrial engineering departments starting in the 1930s; they became very popular in the 1960s when computer programmers used them extensively to map the logic of their programs.

Because a flowchart is a graphical representation, it is logical that there are symbols to represent the different types of activities. A flowchart is employed to provide a diagrammatic picture, by means of a set of symbols, showing all the steps or stages in a process, project or sequence of events, and it is of considerable assistance in documenting and describing a process as an aid to examination and improvement.

Although there are many different types of flowchart symbols, some of the more commonly used symbols are shown in the figure below.

2. Flowcharting a Process

Steps in Flowcharting a Process

1. Decide on the process to flowchart.
2. Define the boundaries of the process: the beginning and the end.
3. Describe the beginning step of the process in an oval.
4. Ask yourself "what happens next?" and add the step to the flowchart as a rectangle. Continue mapping out the steps as rectangles connected by one-way arrows.
5. When a decision point is reached, write the decision in the form of a question in a diamond and develop the "yes" and "no" paths. Each yes/no path must re-enter the process or exit somewhere.
6. Repeat steps 4 and 5 until the last step in the process is reached.
7. Describe the ending boundary/step in an oval.

To Construct an effective flowchart

1. Define the process boundaries with starting and ending points.
2. Complete the big picture before filling in the details.
3. Clearly define each step in the process. Be accurate and honest.
4. Identify time lags and non-value-adding steps.
5. Circulate the flowchart to other people involved in the process to get their comments.


When drawing a flowchart, constantly ask "what happens next?", "is there a decision made at this point?", "does this reflect reality?", "who else knows this process?", etc. When possible, do a walk-through of the process to see if any steps have been left out or extras added that shouldn't be there. The key is not to draw a flowchart representing how the process is supposed to operate, but to determine how it actually does operate. A good flowchart of a bad process will show how illogical or wasteful some of the steps or branches are.

3. Symbols in a Flowchart:

Defines the boundaries of a process; shows the start or stop of a process. The ellipse represents the beginning or ending (process boundaries) of a process

Designates a single step in a process. Briefly describe the step inside the box. The rectangle is used to denote a single step, or activity, in the process. The general symbol used to depict a processing operation

A diamond signifies a decision point in the process. Write the type of decision made inside the diamond in the form of a question. The question is answered by two arrows-- "yes" and "no" --which lead to two different branches. A diamond is used to denote a decision point in the process. The answer to the question is usually of the yes or no variety. It also includes variable type decisions such as which of several categories a process measurement falls into

A small circle that surrounds a letter signifies where you pick up a process on the same page; represents a connection. A small circle with either a number or letter inside denotes the point where the process is picked up again

An arrow denotes the direction, or flow, of activities in the process.

The general symbol that represents input or output media, operations, or processes is a parallelogram.

4. How to Create a Flow Chart

Using square Post-it notes (use squares as activities; rotated into a diamond, they represent a decision), begin listing activities (verb-noun) and decisions (questions). Once you have all of the notes on a sheet of easel paper, order them from the first to the last. Then identify the group or department that does each set of steps at the top of the flowchart. Realign the activities to fit under the appropriate group. Identify the major steps of the process on the left side (planning, doing, checking, acting to improve). Realign the activities to coincide with the main steps of the process. Use arrows to connect the activities and decisions.

ADVANTAGES OF FLOWCHARTS

A flowchart functions as a communications tool. It provides an easy way to convey ideas between engineers, managers, hourly personnel, vendors, and others in the extended process. It is a concrete, visual way of representing complex systems.

A flowchart functions as a planning tool. Designers of processes are greatly aided by flowcharts. They enable a visualization of the elements or new or modified processes and their interactions while still in the planning stages.


A flowchart provides an overview of the system. The critical elements and steps of the process are easily viewed in the context of a flowchart.

A flowchart removes unnecessary details and breaks down the system so designers and others get a clear, unencumbered look at what they're creating.

A flowchart defines roles. It demonstrates the functions of personnel, workstations and sub-processes in a system. It also shows the personnel, operations, and locations involved in the process.

A flowchart demonstrates interrelationships. It shows how the elements of a process relate to each other.

A flowchart promotes logical accuracy. It enables the viewer to spot errors in logic. Planning is facilitated because designers have to clearly break down all of the elements and operations of the process.

A flowchart facilitates troubleshooting. It is an excellent diagnostic tool. Problems in the process, failures in the system, and barriers to communication can be detected by using a flowchart.

A flowchart documents a system. This record of a system enables anyone to easily examine and understand the system. Flowcharts facilitate changing a system because documentation of what exists is available.

FLOWCHARTING a CLINIC VISIT

It cannot be emphasized enough how important it is for people to understand how work gets done. Managers do not understand the details (and the devil is in the details!) but are often held accountable for the outcomes of their area without a clear understanding of what contributes to those outcomes. Workers get very frustrated when things go wrong and often assume that others are deliberately trying to make their lives difficult. Conflict management is much easier to deal with when everyone can see the problematic system!

Getting a picture of how a clinic functions begins with a high level flowchart, usually depicted horizontally and called the core process. It is the "backbone" of the clinic operations to which other sub-processes can be attached. Once the core process flowchart is done (this should take about an hour), it should be given to staff and physicians for their input. It is common for the flowcharting process to have many iterations, as revisions and changes almost always need to be made before the flowchart accurately reflects the actual process. The trick is to keep the initial charting "high level" and not get stuck in details just yet, and to segment the core process into components, such as access, intake, assessment/evaluation, treatment, and discharge/follow-up, for manageability and to help identify indicators.

Once the core process is agreed upon by those who work in the clinic, the next step is to decide which processes need attention (the bottlenecks for patient flow).

The hardest part of flowcharting is keeping it simple enough to be workable, but with enough detail to show the trouble spots. It is therefore recommended that the flowcharts for sub-processes be drawn vertically (as opposed to the core process, which goes horizontally), keeping the smoothest flow to the left of the page and the complexity to the right. This does not always work out perfectly, but when it does, it allows us to see the non-value-added steps very quickly. Remember also to leave lots of room for comments, since revisions are the rule; the more buy-in, the more revisions you have. Another trick is to capture the "issues" that surface when you're trying to create a flowchart. These issues may not be boxes in the process, but they will eventually be clues as to what needs to be addressed. For example, patients forget to bring their x-rays. So that such issues do not slow us down from moving ahead with the process steps, they should be written down at the bottom of the flowchart.

Once a flowchart has been created and agreed upon, the next step is to study it to see where improvements can be made. It is helpful to create a flowchart of the actual process first so you can truly see where the improvement opportunities are. If you are creating a new process, then the ideal flowchart can be created first. Having the processes clearly outlined will allow everyone to see what is supposed to happen and will also allow you to identify performance indicators/measures that will tell you how your clinic and processes are functioning.

Performance measures are best identified by dividing your flowchart into segments. These segments can be gleaned from the process itself. For example, in the clinic, there is a segment that deals with patients calling in for appointments, a second segment that involves the registration of the patient when he/she arrives, a third segment that is the actual office visit, and then a discharge segment. The question you want to ask is: how is each of these segments working? How would I know? If you want to answer that question, you need to look at where performance measures would fit.


CHECK SHEET

Definition: A simple data collection form consisting of multiple categories with definitions. Data are entered on the form with a simple tally mark each time one of the categories occurs. A check sheet is a structured, prepared form for collecting and analyzing data. This is a generic tool that can be adapted for a wide variety of purposes.

A check sheet is a data-recording device. A check sheet is essentially a list of categories. As events in these categories occur, a check or mark is placed on the check sheet in the appropriate category.

When to Use

When data can be observed and collected repeatedly by the same person or at the same location.
When collecting data on the frequency or patterns of events, problems, defects, defect location, defect causes, etc.
When collecting data from a production process.

Purpose: To facilitate the collection and analysis of data. A check sheet is a simple means of data collection. The most straightforward check sheet is simply to make a list of items that you expect will appear in a process and to mark a check beside each item when it does appear. This type of data collection can be used for almost anything, from checking off the occurrence of particular types of defects to the counting of expected items (e.g., the number of times the telephone rings before being answered). The main purpose of check sheets is to ensure that the data are collected carefully and accurately by operating personnel for process control and problem solving. Whenever possible, check sheets are also designed to show location, as below (check sheet for plastic mold nonconformities).

Procedure
1. Decide what event or problem will be observed. Develop operational definitions.
2. Decide when data will be collected and for how long.
3. Design the form. Set it up so that data can be recorded simply by making check marks or Xs or similar symbols and so that data do not have to be recopied for analysis.
4. Label all spaces on the form.
5. Test the check sheet for a short trial period to be sure it collects the appropriate data and is easy to use.
6. Each time the targeted event or problem occurs, record data on the check sheet.

How to Construct:

1. Clearly define the objective of the data collection.
2. Determine other information about the source of the data that should be recorded, such as shift, date, or machine.
3. Determine and define all categories of data to be collected.
4. Determine the time period for data collection and who will collect the data.
5. Determine how instructions will be given to those involved in data collection.
6. Design a check sheet by listing categories to be counted.
7. Pilot the check sheet to determine ease of use and reliability of results.
8. Modify the check sheet based on results of the pilot.

Tips:

Use Ishikawa diagrams or Brainstorming to determine categories to be used on the check sheet.


Construct an operational definition of each category to ensure the data collected are consistent.
Make the check sheet as clear and easy to use as possible.
Spend adequate time explaining the objective of the data collection to those involved in recording the data to ensure the data will be reliable.
Data collected in this format facilitates easy Pareto analysis.

Here is an example of a check sheet:

TRAY DELIVERY PROCESS CHECK SHEET
DATE: 1/13/89 ~ 1/19/89    FLOOR/SHIFT: 47/3

CATEGORY                      MON TUE WED THU FRI SAT SUN   TOT

Tray Disabled | | | 3 Production Sheet Inaccurate | | | 3 Menu Incorrect | | || | 5 Diet Order Changed | | 3 Wrong Order Delivered | | 4 Patient Asleep ||| | ||||| | || ||| | 16 Patient Out of Room ||||| ||| || | | || || 16 Doctor Making Rounds || |||| | | | | 11 Patient Not Hungry | | || | ||| | ||||| 14 Cart Faulty | | | 3 Plate Warmer Broken || || ||| || |||| | || 16 Heating Unit Broken | | | | 4 Thermometer Miscalibrated | | 2 Nursing Unavailable ||| || ||| |||| || ||| ||| 20 Cart Unavailable || ||| || || || 11 Elevator Malfunction || ||| | 6

TOTAL 22 16 25 13 21 17 20 134

The figure below shows a check sheet used to collect data on telephone interruptions. The tick marks were added as data was collected over several weeks.

Below is a sample of a check sheet form used by an inspector/operator in a manufacturing area.


EXAMPLE : 1

In a new plant, time cards need to be submitted to the payroll office within 30 minutes after the close of the last shift on Friday. After twenty weeks of operation, the payroll office complained to the plant manager that this process is not being adhered to and that, as a result, the payroll checks are not being processed on time and workers are starting to complain. The requirement is 30 minutes or less.

The sample data gathered by the payroll office are as follows:

30 30 27 33

30 30 32 28

28 30 32 29

29 30 31 29

31 30 31 30

27 = 1

28 = 2

29 = 3

30 = 8

31 = 3

32 = 2

33 = 1

TOTAL = 20
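A minimal Python sketch that builds the same tally from the Example 1 data and also computes the kind of summary statistics asked for in the exercises below.

from collections import Counter
import statistics

# Time-card submission times from Example 1 (minutes after the close of the shift).
times = [30, 30, 27, 33, 30, 30, 32, 28, 28, 30,
         32, 29, 29, 30, 31, 29, 31, 30, 31, 30]

tally = Counter(times)                      # the check-sheet counts
for value in sorted(tally):
    print(value, "|" * tally[value], tally[value])

print("mean =", statistics.mean(times), "std dev =", statistics.stdev(times))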

EXAMPLE : 2

Below are data taken from the pressing machine that produces the ceramic part for the IC ROM before it is sent to the funnel burner.

9.85 10.00 9.95 10.00 10.00 10.00 9.95 10.15 9.85 10.05

9.90 10.05 9.85 10.10 10.00 10.15 9.90 10.00 9.80 10.00

9.90 10.05 9.90 10.10 9.95 10.00 9.90 9.90 9.90 10.00

9.90 10.10 9.90 10.20 9.95 10.00 9.85 9.95 9.95 10.10

9.95 10.15 9.95 10.00 9.90 10.00 10.00 10.10 9.95 10.15

9.80 = 1

9.85 = 4

9.90 = 10

9.95 = 9

10.00 = 13

10.05 = 3

10.10 = 5

10.15 = 4

10.20 = 1

TOTAL = 50


EXAMPLE : 3

Below are 100 metal thickness measurements taken on the optic component to determine the data distribution.

3.56 3.46 3.48 3.50 3.42 3.43 3.52 3.49 3.44 3.50

3.48 3.56 3.50 3.52 3.47 3.48 3.46 3.50 3.56 3.38

3.41 3.37 3.47 3.49 3.45 3.44 3.50 3.49 3.46 3.46

3.55 3.52 3.44 3.50 3.45 3.44 3.48 3.46 3.52 3.46

3.48 3.48 3.32 3.40 3.52 3.34 3.46 3.43 3.30 3.46

3.59 3.63 3.59 3.47 3.38 3.52 3.45 3.48 3.31 3.46

3.40 3.54 3.46 3.51 3.48 3.50 3.68 3.60 3.46 3.52

3.48 3.50 3.56 3.50 3.52 3.46 3.48 3.46 3.52 3.56

3.52 3.48 3.46 3.45 3.46 3.54 3.54 3.48 3.49 3.41

3.41 3.45 3.34 3.44 3.47 3.47 3.41 3.48 3.54 3.47

XH = 3.68 ; XL = 3.30 R = 3.68 – 3.30 = 0.38

No.    Cell Boundaries (Lowest ~ Highest)    Cell Midpoint    Frequency

1 3.275 ~ 3.325 3.30 = 3

2 3.325 ~ 3.375 3.35 = 3

3 3.375 ~ 3.425 3.40 = 9

4 3.425 ~ 3.475 3.45 = 32

5 3.475 ~ 3.525 3.50 = 38

6 3.525 ~ 3.575 3.55 = 10

7 3.575 ~ 3.625 3.60 = 3

8 3.625 ~ 3.675 3.65 = 1

9 3.675 ~ 3.725 3.70 = 1

TOTAL = 100
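A short Python sketch of the cell-tallying step, assuming the cell width (0.05) and first lower boundary (3.275) used above; only the first few readings from Example 3 are listed, to keep the sketch short.

# Sketch: tally thickness readings into equal-width cells and report cell midpoints.
from collections import Counter

readings = [3.56, 3.46, 3.48, 3.50, 3.42, 3.43]   # first few readings from Example 3
width, first_lower = 0.05, 3.275

def midpoint(x):
    cell = int((x - first_lower) // width)         # index of the cell containing x
    return round(first_lower + (cell + 0.5) * width, 3)

tally = Counter(midpoint(x) for x in readings)
for mid in sorted(tally):
    print(mid, tally[mid])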

EXERCISE : 1

In his last 70 games a professional basketball player made the following scores:

10 17 9 17 18 20 16

7 17 19 13 15 14 13

12 13 15 14 13 10 14

11 15 14 11 15 15 16

9 18 15 12 14 13 14

13 14 16 15 16 15 15

14 15 15 16 13 12 16

10 16 14 13 16 14 15

6 15 13 16 15 16 16

12 14 16 15 16 13 15

a. Construct the check sheet for table above!

b. Calculate the mean, standard deviation, and mode of the distribution.


EXERCISE : 2

The ABC Company is planning to analyze the average weekly wage distribution of its 58 employees during fiscal year 2003. The 58 weekly wages are available as raw data corresponding to the alphabetic order of the employees’ names as below:

241 253 312 258 264 265

316 242 257 251 282 305

298 276 284 304 285 307

263 301 262 272 271 265

249 229 253 285 267 250

288 248 276 280 252 258

262 314 241 257 250 275

275 301 283 249 288 275

281 276 289 228 275

170 289 262 282 260

a. Construct the check sheet for table above!

b. Calculate the mean, standard deviation, and mode of the distribution.

EXERCISE : 3

Thickness measurements on pieces of silicon (mm x 0.001)

790 1170 970 940 1050 1020 1070 790

1340 710 1010 770 1020 1260 870 1400

1530 1180 1440 1190 1250 940 1380 1320

1190 750 1280 1140 850 600 1020 1230

1010 1040 1050 1240 1040 840 1120 1320

1160 1100 1190 820 1050 1060 880 1100

1260 1450 930 1040 1260 1210 1190 1350

1240 1490 1490 1310 1100 1080 1200 880

820 980 1620 1260 760 1050 1370 950

1220 1300 1330 1590 1310 830 1270 1290

1000 1100 1160 1180 1010 1410 1070 1250

1040 1290 1010 1440 1240 1150 1360 1120

980 1490 1080 1090 1350 1360 1100 1470

1290 990 790 720 1010 1150 1160 850

1360 1560 980 970 1270 510 960 1390

1070 840 870 1380 1320 1510 1550 1030

1170 920 1290 1120 1050 1250 960 1550

1050 1060 970 1520 940 800 1000 1110

1430 1390 1310 1000 1030 1530 1380 1130

1110 950 1220 1160 970 940 880 1270

750 1010 1070 1210 1150 1230 1380 1620

1760 1400 1400 1200 1190 970 1320 1200

1460 1060 1140 1080 1210 1290 1130 1050

1230 1450 1150 1490 980 1160 1520 1160

1160 1700 1520 1220 1680 900 1030 850

a. Construct the check sheet for table above!

b. Calculate the mean, standard deviation, and mode of the distribution.


PARETO ANALYSIS (The 80:20 Rule)

INTRODUCTION.

The Pareto effect is named after Vilfredo Pareto, an economist and sociologist who lived from 1848 to 1923. Originally trained as an engineer he was a one time managing director of a group of coalmines. Later he took the chair of economics at Lausanne University, ultimately becoming a recluse. Mussolini made him a senator in 1922 but by his death in 1923 he was already at odds with the regime. Pareto was an elitist believing that the concept of the vital few and the trivial many extended to human beings.

This, known as the 80:20 rule, can be observed in action so often that it seems to be almost a universal truth. As several economists have pointed out, at the turn of the century the bulk of the country's wealth was in the hands of a small number of people.

Much of his writing is now out of favour and some people would like to re-name the effect after Mosca, or even Lorenz. However it is too late now – the Pareto principle has earned its place in the manager’s kit of productivity improvement tools.

This fact gave rise to the Pareto effect or Pareto's law: a small proportion of causes produce a large proportion of results. Thus frequently a vital few causes may need special attention while the trivial many may warrant very little. It is this phrase that is most commonly used in talking about the Pareto effect – 'the vital few and the trivial many'. A vital few customers may account for a very large percentage of total sales. A vital few taxes produce the bulk of total revenue. A vital few improvements can produce the bulk of the results.

This method stems in the first place from Pareto’s suggestion of a curve of the distribution of wealth in a book of 1896. Whatever the source, the phrase of ‘the vital few and the trivial many’ deserves a place in every manager’s thinking. It is itself one of the most vital concepts in modern management. The results of thinking along Pareto lines are immense.

In practically every industrial country a small proportion of all the factories employ a disproportionate number of factory operatives. In some countries 15 percent of the firms employ 70 percent of the people. This same state of affairs is repeated time after time. In retailing for example, one usually finds that up to 80 percent of the turnover is accounted for by 20 percent of the lines.

For example, we may have a large number of customer complaints, a lot of shop floor accidents, a high percentage of rejects, and a sudden increase in costs etc. The first stage is to carry out a Pareto analysis. This is nothing more than a list of causes in descending order of their frequency or occurrence. This list automatically reveals the vital few at the top of the list, gradually tailing off into the trivial many at the bottom of the list. Management’s task is now clear and unavoidable: effort must be expended on those vital few at the head of the list first. This is because nothing of importance can take place unless it affects the vital few. Thus management’s attention is unavoidably focussed where it will do most good.

Another example is stock control. You frequently find an elaborate procedure for stock control with considerable paperwork flow. This is usually because the systems and procedures are geared to the most costly or fast-moving items. As a result trivial parts may cost a firm more in paperwork than they cost to purchase or to produce. An answer is to split the stock into three types, usually called A, B and C. Grade A items are the top 10 percent or so in money terms while grade C are the bottom 50-75 percent. Grade B are the items in between. It is often well worthwhile treating these three types of stock in a different way leading to considerable savings in money tied up in stock.

Production control can use the same principle by identifying these vital few processes, which control the manufacture, and then building the planning around these key processes. In quality control concentrating in particular on the most troublesome causes follows the principle. In management control, the principle is used by top management looking continually at certain key figures.


Thus it is clear that the Pareto concept – ‘the vital few and the trivial many’ – is of utmost importance to management.

Pareto charts show where effort can be focused for maximum benefit. It may take two or more Pareto charts to focus the problem to a level that can be successfully analyzed.

THE PARETO CHART

A Pareto chart is a graphical representation that displays data in order of priority. It can be a powerful tool for identifying the relative importance of causes, most of which arise from only a few of the processes, hence the 80:20 rule. Pareto Analysis is used to focus problem solving activities, so that areas creating most of the issues and difficulties are addressed first.

The Pareto chart combines a bar graph with a cumulative line graph. The bars are placed from left to right in descending order. The cumulative line graph shows the percent contribution of all preceding bars. However, the graph has the advantage of providing a visual impact of those vital few characteristics that need attention.

Below is a sample Pareto chart

How to Construct:
1. Determine the method of classifying the data: by problem, cause, type, nonconformity, and so forth. Decide the problem which is to be analyzed.
2. Decide the period over which data are to be collected.
3. Identify the main causes or categories of the problem. Collect data for an appropriate time interval.


4. Tabulate the frequency of each category and list in descending order of frequency. (If there are too many categories it is permissible to group some into a miscellaneous category for the purpose of analysis and presentation.)
5. Arrange the data as in a bar chart. Summarize the data and rank order categories from largest to smallest.
6. Construct the diagram with the columns arranged in order of descending frequency.
7. Determine cumulative totals and percentages and construct the cumulative percentage curve.
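As a rough illustration of steps 4 to 7, the following Python sketch builds the descending-order table with cumulative percentages from the Example 1 rewrite data given later in this section.

# Sketch: Pareto table (descending frequency with cumulative percent).
causes = {
    "Misspelled words": 33,
    "Incorrect punctuation": 21,
    "Incorrect spacing": 8,
    "Incorrect signature block": 6,
    "Incorrect heading": 3,
}

total = sum(causes.values())
cumulative = 0
for cause, count in sorted(causes.items(), key=lambda item: item[1], reverse=True):
    cumulative += count
    print(f"{cause:28s} {count:3d} {count / total:6.1%} {cumulative / total:6.1%}")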

COMMON PARETO CHART ERRORS

1. Stopping Your Pareto Chart Analysis Too Soon

A common error is to stop at too high a level. Double the power of Pareto's 80-20 rule by applying the 4-50 rule. Look at the 20% within the 20% (4%) that represents over half of the problem. You can do this by drawing another Pareto chart using just the data representing your big bar. In the example below the blue bar represents the 4% of the business that is causing over 50% of the problem.

2. Settling for a Perflato chart instead of a Pareto Chart

If the left bar or two left bars on your Pareto chart are not significantly bigger than the other bars on the chart then you have a "perflato" chart. Try organizing your data by another attribute (e.g subtotal by geography or time of day instead of by error type) and see if you can get a Pareto chart that helps you better focus your efforts.

For example, one client created a Pareto chart of insurance claim errors by error code and found no one code that accounted for a significant amount of errors. They then created a Pareto chart of errors by insurance company and found that one company was the source of most of the errors. Once they approached the company with compelling data it was easy to convince them their process needed fixing.

CONCLUSION

Even in circumstances which do not strictly conform to the 80 : 20 rule the method is an extremely useful way to identify the most critical aspects on which to concentrate. When used correctly Pareto Analysis is a powerful and effective tool in continuous improvement and problem solving to separate the ‘vital few’ from the ‘many other’ causes in terms of cost and/or frequency of occurrence.

Difficulties associated with Pareto Analysis:

Misrepresentation of the data.
Inappropriate measurements depicted.
Lack of understanding of how it should be applied to particular problems.
Not knowing when and how to use Pareto Analysis.
Inaccurate plotting of cumulative percent data.


Overcoming the difficulties

Define the purpose of using the tool.
Identify the most appropriate measurement parameters.
Use check sheets to collect data for the likely major causes.
Arrange the data in descending order of value and calculate % frequency and/or cost and cumulative percent.
Plot the cumulative percent through the top right side of the first bar.
Carefully scrutinise the results. Has the exercise clarified the situation?

It is the discipline of organising the data that is central to the success of using Pareto Analysis. Once calculated and displayed graphically, it becomes a selling tool to the improvement team and management, raising the question why the team is focusing its energies on certain aspects of the problem.

EXAMPLE :1 Checklist frequency tally of causes of rewrites of letters

CAUSE of REWRITE             NUMBER of OCCURRENCES      %        % CUM
Misspelled words             33                         46.5%    46.5%
Incorrect punctuation        21                         29.6%    76.1%
Incorrect spacing             8                         11.3%    87.3%
Incorrect signature block     6                          8.5%    95.8%
Incorrect heading             3                          4.2%   100.0%
TOTAL                        71

[Pareto chart for Example 1: bars show the number of occurrences of each cause of rewrite in descending order; the line shows the cumulative percentage.]


EXAMPLE :2

COATING MACHINE NONCONFORMITY

Machine NC % % CUM

M/C-1 35 14.77% 14.77%

M/C-2 51 21.52% 36.29%

M/C-3 44 18.57% 54.85%

M/C-4 47 19.83% 74.68%

M/C-5 29 12.24% 86.92%

M/C-6 31 13.08% 100.00%

TOTAL 237

[Pareto chart for Example 2: coating machine nonconformities in descending order (M/C-2, M/C-4, M/C-3, M/C-1, M/C-6, M/C-5), with a cumulative percentage line.]

EXAMPLE :3 Summary sheet for Shock Absorber Line

X-Axis Category    No. Defective    %         % Cum

Spot Weld          41               48.81%    48.81%

Leakers 23 27.38% 76.19%

Orifice 8 9.52% 85.71%

Steel (crimp) 6 7.14% 92.86%

Oil (Drift) 5 5.95% 98.81%

Rod (Chrome) 1 1.19% 100.00%

TOTAL 84



[Pareto chart for Example 3: number of defects per category for the shock absorber line, in descending order, with a cumulative percentage line.]

EXAMPLE :4 Listed below are the defect types of a corded phone and the repair cost per defect.

Defect Type No. Defects Cost/Defect($)

Impurities 278 4

Dented 145 0.5

Crack 30 40

Scratch 25 1

Broken 20 10

Bend 2 10

a. Make the Pareto for the number of defects. b. Make the Pareto for cost per defect. c. Make the Pareto combining both.

Defect Type Defects % Defects Cost/Defect($) % C/D % CUM Defects % CUM C/D

Impurities 278 55.60% 5 7.35% 55.60% 7.35%

Dented 145 29.00% 2 2.94% 84.60% 10.29%

Crack 30 6.00% 40 58.82% 90.60% 69.12%

Scratch 25 5.00% 1 1.47% 95.60% 70.59%

Broken 20 4.00% 10 14.71% 99.60% 85.29%

Bend 2 0.40% 10 14.71% 100.00% 100.00%

TOTAL 500 100.00% 68 100.00% 100.00% 100.00%


[Pareto chart (a) for Example 4: number of defects by defect type, with a cumulative percentage line.]

Defect Type    Defects    % Defects    % CUM Defects
Impurities     278        55.60%       55.60%
Dented         145        29.00%       84.60%
Crack           30         6.00%       90.60%
Scratch         25         5.00%       95.60%
Broken          20         4.00%       99.60%
Bend             2         0.40%      100.00%
TOTAL          500       100.00%

[Pareto chart (b) for Example 4: cost per defect by defect type, with a cumulative percentage line.]

Defect Type    Cost/Defect ($)    % C/D      % CUM C/D
Crack          40                 61.54%     61.54%
Broken         10                 15.38%     76.92%
Bend           10                 15.38%     92.31%
Dented          2                  3.08%     95.38%
Dented          2                  3.08%     98.46%
Scratch         1                  1.54%    100.00%
TOTAL          65                100.00%    100.00%

Defect Type    % Defects x Cost/Defect    % (x10) Defects    Cum % of D x C/D

Impurities 2.78 5.56 44.48%

Crack 2.40 0.60 82.88%

Dented 0.58 2.90 92.16%

Broken 0.40 0.40 92.16%

Scratch 0.05 0.50 98.56%

Bend 0.04 0.04 98.56%

TOTAL 6.25 100 100.00%


[Pareto chart (c) for Example 4: incremental cost (% defects x cost per defect) by defect type, with a cumulative percentage line.]

EXERCISE :1 Two partners in an upholstery business are interested in decreasing the number of complaints from customers who have had furniture reupholstered by their staff. For the past six months, they have been keeping detailed records of the complaints and what had to be done to correct the situations. To help their analysis of which problems to attack first, they decide to create several Pareto charts. Use the following table to create Pareto charts for the number of complaints, the percentage of complaints, and the dollar loss associated with complaints.

CATEGORY    No. of Complaints    % of Complaints    Dollar Loss

Loose threads 14 28 294

Incorrect hemming 8 16 216

Material flaws 2 4 120

Stitching flaws 6 12 126

pattern alignment errors 4 8 240

Color mismatch 2 4 180

Trim errors 6 12 144

Button problems 3 6 36

Miscellaneous 5 10 60

a. Make the pareto diagram & Discussed its


EXERCISE :2 During the past month, a customer-satisfaction survey was given to 200 customers at a local fast-food restaurant. The following complaints were lodged:

COMPLAINT    No. of Complaints

Cold food 105

Flimsy Utensils 20

Food tastes bad 10

Salad not fresh 94

Poor service 15

Food greasy 9

Lack of courtesy 5

Lack of cleanliness 25

a. Make the Pareto diagram.

EXERCISE :3 Create a Pareto diagram using the table listed below and the following information about the individual costs associated with correcting each type of nonconformity. Based on your Pareto diagram showing the total cost associated with each type of nonconformity, where should PT. Tool concentrate its improvement efforts?

Type of Problems Qty Cost (RM)

Scratches 8 145

Dents 2 200

Surface finish disfigurations in plant 4 954

Damaged casing 1 6500

Wrong Color 1 200

a. Make the Pareto diagram and discuss it.


HISTOGRAM

INTRODUCTION

In industry, business, and government the mass of data that have been collected is voluminous. Some means of summarizing the data are needed to show the value about which the data tend to cluster and how the data are dispersed or spread out. Two techniques are used to accomplish this summarization of data, as follows:

Graphical: The graphical technique is a plot or picture of a frequency distribution, which is a summarization of how the data points (observations) occur within each subdivision of observed values.

Analytical: The analytical technique summarizes data by computing a measure of central tendency (average, median, and mode) and a measure of the dispersion (range and standard deviation).

Frequency distributions of variables data are commonly presented in frequency polygons (histograms). In a histogram, vertical bars are drawn, each with a width corresponding to the width of the class interval and a height corresponding to the frequency of that interval. The bars share common sides with no space between them.

Frequency distributions of attribute data are commonly presented in a bar chart. For attribute data, a bar chart is used to graphically display the data. It is constructed in exactly the same way as a histogram, except that instead of using a vertical bar spanning the entire class interval, we use a line or bar centered on each attribute category.

A frequency distribution shows how often each different value in a set of data occurs. A histogram is the most commonly used graph to show frequency distributions. It looks very much like a bar chart, but there are important differences between them.

SCOPE & BACKGROUND

In nature and in every process, variability exists. Collecting data on a process and organizing the data appropriately are the first steps in bringing recognition to the presence and type of variation. One simple and useful tool for graphically representing the variation in a given set of data is the histogram.

The histogram evolved to meet the need for evaluating data that occurs at a certain frequency. This is possible because the histogram allows for a concise portrayal of information in a bar graph format. The histogram is a powerful engineering tool when routinely and intelligently used. The histogram clearly portrays information on location, spread, and shape that enables the user to perceive subtleties regarding the functioning of the physical process that is generating the data. It can also help suggest both the nature of, and possible improvements for, the physical mechanisms at work in the process.

A histogram is a graphical display of tabulated frequencies: a graphical summary of the frequency distribution of the data. When measurements are taken from a process, they can be summarized by using a histogram. Data are organized in a histogram to allow those investigating the process to see any patterns in the data that would be difficult to see in a simple table of numbers. Each interval on the histogram shows the total number of observations made in a separate class.

A histogram is the graphical version of a table which shows what proportion of cases fall into each of several or many specified categories. The categories are usually specified as nonoverlapping intervals of some variable, and the categories (bars) must be adjacent. A histogram divides the range of data into intervals and shows the number, or percentage, of observations that fall into each interval. Histograms describe the variation in the process. The histogram graphically estimates the process capability and, if required, the relationship to the specifications and the nominal (target). A histogram consists of a set of rectangles that represent the frequency of observed values in each category.


Histograms have certain identifiable characteristics. One characteristic of the distribution concerns the symmetry or lack of symmetry of the data. Another characteristic concerns the number of modes, or peaks, in the data: there can be one mode, two modes (bi-modal), or multiple modes. A histogram is like a snapshot of the process showing the variation. Histograms can determine the process capability, compare with specifications, suggest the shape of the population, and indicate discrepancies in the data, such as gaps.

Histograms show the spread, or dispersion, of data. The Excel histogram uses variable data to determine process capability. The customer's upper specification limit (USL) and lower specification limit (LSL) determine process capability. With attribute data (i.e., defects), capability assumes that the process must deliver zero defects. The histogram also suggests the shape of the population and indicates if there are any gaps in the data.

WHEN TO USE

When the data are numerical.
When you want to see the shape of the data's distribution, especially when determining whether the output of a process is distributed approximately normally.
When analyzing whether a process can meet the customer's requirements.
When analyzing what the output from a supplier's process looks like.
When seeing whether a process change has occurred from one time period to another.
When determining whether the outputs of two or more processes are different.
When you wish to communicate the distribution of data quickly and easily to others.


OBJECTIVE

The main purpose of histograms is to provide clues and information for the reduction of variation. These come mainly from the identification and interpretation of patterns of variation. There are two types of variation patterns, as follows:

Random (from chance or common causes). Because these patterns are repeated frequently in real life, they have been found to be quite useful and have therefore been documented and quantified.

Non-random (from assignable or special causes). Non-random patterns are patterns of error and are used in SPC, especially control charting, to assist in the reduction of variation.

The purpose of a histogram is to graphically summarize the distribution of a univariate data set. The histogram graphically shows the following:

1. center (i.e., the location) of the data;
2. spread (i.e., the scale) of the data;
3. skewness of the data;
4. presence of outliers; and
5. presence of multiple modes in the data.

These features provide strong indications of the proper distributional model for the data. The probability plot or a goodness-of-fit test can be used to verify the distributional model.

1. Cp, Cpk, Pp, Ppk process capability analysis
2. USL, LSL
3. Mean, Median, Mode, Cpm, Standard Dev, Max, Min, Z bench, Cpk Upper, Cpk Lower, Z target, % defects, PPM, Expected, Normal Distribution curve.

SHAPE DISTRIBUTION TYPE

The shape of the histogram can provide clues to the cause(s) of variation, as follows:

1. Bell-Shaped (normal, Gaussian) Any deviation from this pattern is usually abnormal in some way (assignable or special causes) and can usually, therefore, provide clues about the variation.

2. Bi-Modal This distribution is usually a mixture of 2 processes (a combination of 2 distributions) such as identical parts from 2 different machines.


3. The Plateau Distribution (flat on top, with only slight differences between bar heights) This usually results from a mixture of many different processes, and can be analyzed by diagramming the flow and observing the processes.

4. The Comb Distribution (alternating high and low values) This usually indicates errors in measurement, errors in the organization of the data, and/or rounding errors.

5. The Skewed Distribution (a long tail on one side) If the long tail is to the left, the distribution is negatively skewed; if it is to the right, it is called positively skewed. Causes of skewed distributions include short-cycle tasks, one-sided specification limits, and practical limits on one side only.

6. The Truncated Distribution (a smooth curve with an abrupt ending on one side) This is often caused by forces external to the process, such as screening.

7. The Isolated Peak Distribution (a small, separate group of data to one side of the parent group)

8. The Edge-Peaked Distribution (a peak right at the edge) This usually means that data from an otherwise long tail have been lumped together at one point (for example, data from outside a limit have been recorded as being inside the limit).

The most common form of the histogram is obtained by splitting the range of the data into equal-sized bins (called classes). Then for each bin, the number of points from the data set that fall into each bin are counted. That is

Vertical axis: Frequency (i.e., counts for each bin)
Horizontal axis: Response variable

The classes can either be defined arbitrarily by the user or via some systematic rule. A number of theoretically derived rules have been proposed by Scott (Scott 1992).

The cumulative histogram is a variation of the histogram in which the vertical axis gives not just the counts for a single bin, but rather gives the counts for that bin plus all bins for smaller values of the response variable.

Both the histogram and cumulative histogram have an additional variant whereby the counts are replaced by the normalized counts. The names for these variants are the relative histogram and the relative cumulative histogram.

There are two common ways to normalize the counts.

1. The normalized count is the count in a class divided by the total number of observations. In this case the relative counts are normalized to sum to one (or 100 if a percentage scale is used). This is the intuitive case where the height of the histogram bar represents the proportion of the data in each class.

2. The normalized count is the count in the class divided by the number of observations times the class width. For this normalization, the area (or integral) under the histogram is equal to one. From a probabilistic point of view, this normalization results in a relative histogram that is most akin to the probability density function and a relative cumulative histogram that is most akin to the cumulative distribution function. If you want to overlay a probability density or cumulative distribution function on top of the histogram, use this normalization. Although this normalization is less intuitive (relative frequencies greater than 1 are quite permissible), it is the appropriate normalization if you are using the histogram to model a probability density function.
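As a small illustration of the two normalizations described above, the following Python sketch (not part of the original course text; the data values and bin edges are made-up assumptions) builds a frequency histogram with NumPy and then forms both the relative histogram and the density-normalized histogram.

```python
# A minimal sketch, assuming made-up data and equal-width bins.
import numpy as np

data = np.array([9.8, 9.9, 9.9, 10.0, 10.0, 10.0, 10.1, 10.1, 10.2, 10.4])
edges = np.array([9.75, 9.95, 10.15, 10.35, 10.55])    # equal-width bins (width 0.2)

counts, _ = np.histogram(data, bins=edges)              # plain frequency histogram
relative = counts / counts.sum()                         # normalization 1: heights sum to 1
width = np.diff(edges)
density = counts / (counts.sum() * width)                # normalization 2: area under histogram = 1

print(counts)                    # [3 5 1 1]
print(relative.sum())            # 1.0
print((density * width).sum())   # 1.0 -> area equals one, comparable to a probability density
```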

MATHEMATICAL DEFINITION

In a more general mathematical sense, a histogram is simply a mapping that counts the number of observations that fall into various disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let N be the total number of observations and n be the total number of bins, the histogram hk meets the following condition:

N = h1 + h2 + … + hn (that is, N = Σ hk, the sum taken over k = 1, …, n),


where k is an index over the bins.

Cumulative Histogram

A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram Hk of a histogram hk is defined as:

Hk = h1 + h2 + … + hk (the sum of the counts in bins 1 through k).
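A minimal Python sketch of these two definitions, using the pellet-penetration data from Example 1B later in this chapter (the bin edges are an assumption made for illustration): the bin counts hk sum to N, and the cumulative histogram Hk accumulates them.

```python
# A minimal sketch, illustrative only.
import numpy as np

data = np.array([2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6])     # Example 1B penetration depths
edges = np.array([1.5, 2.5, 3.5, 4.5, 5.5, 6.5])            # assumed bin edges (5 bins)

h, _ = np.histogram(data, bins=edges)    # h_k, k = 1..n
H = np.cumsum(h)                          # H_k = h_1 + ... + h_k

print(h)         # [1 4 3 2 2]
print(h.sum())   # 12 == N, the total number of observations
print(H)         # [ 1  5  8 10 12] -> the last value equals N
```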

Rules of Thumb

Most people use 7 to 10 classes for histograms. However, from time to time the following rule of thumb has been used to chunk the data, where n is the number of observations in the sample:

Nclass = 10 log10(n)

...with varying degrees of success. This technique does not perform well with n < 30.

The bin width (or the number of bins) of a histogram can also be selected by evaluating the cost formula (2k − v) / w², where w is the bin width and k and v are the mean and variance of the number of data points in the bins. The optimal bin width is the one that minimizes this formula.
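The sketch below (illustrative only; the helper function names are not from the text) compares two of the class-count rules of thumb that appear in this chapter: the 10·log10(n) rule above and the square-root-of-sample-size rule used in the construction procedure later on.

```python
# A minimal sketch comparing two class-count rules of thumb for a few sample sizes.
import math

def classes_log_rule(n):
    return round(10 * math.log10(n))   # Nclass = 10 log10(n)

def classes_sqrt_rule(n):
    return round(math.sqrt(n))         # number of cells closest to sqrt(n)

for n in (20, 50, 100, 250, 1000):
    print(n, classes_log_rule(n), classes_sqrt_rule(n))
# e.g. n = 100 -> 20 classes by the log rule, 10 by the square-root rule
```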

QUESTIONS

The histogram can be used to answer the following questions:

1. What kind of population distribution do the data come from? 2. Where are the data located? 3. How spread out are the data? 4. Are the data symmetric or skewed? 5. Are there outliers in the data?

INTERPRETATION

When combined with the concept of the normal curve and the knowledge of a particular process, the histogram becomes an effective, practical working tool in the early stages of data analysis. A histogram may be interpreted by asking three questions:

1. Is the process performing within specification limits? 2. Does the process seem to exhibit wide variation? 3. If action needs to be taken on the process, what action is appropriate?

The answer to these three questions lies in analyzing three characteristics of the histogram.

1. How well is the histogram centered? The centering of the data provides information on the process aim about some mean or nominal value.

2. How wide is the histogram? Looking at histogram width defines the variability of the process about the aim.

3. What is the shape of the histogram? Remember that the data is expected to form a normal or bell-shaped curve. Any significant change or anomaly usually indicates that there is something going on in the process which is causing the quality problem.


EXAMPLES of TYPICAL DISTRIBUTIONS

NORMAL

Depicted by a bell-shaped curve: the most frequent measurement appears at the center of the distribution, and less frequent measurements taper gradually at both ends of the distribution.

Indicates that a process is running normally (only common causes are present).

BI-MODAL

The distribution appears to have two peaks. This may indicate that data from more than one process are mixed together:

1. materials may come from two separate vendors 2. samples may have come from two separate machines

CLIFF-LIKE

Appears to end sharply or abruptly at one end. Indicates possible sorting or inspection of non-conforming parts.

SAW-TOOTHED

Also commonly referred to as a comb distribution; appears as an alternating jagged pattern. Often indicates a measuring problem:

1. improper gauge readings 2. gauge not sensitive enough for the readings.

SKEWED

Appears as an uneven curve; values seem to taper to one side

It is worth mentioning again that this or any other phase of histogram analysis must be married to knowledge of the process being studied to have any real value. Knowledge of the data analysis itself does not provide sufficient insight into the quality problem.


CREATING HISTOGRAM

1. Collect the consecutive data points from a process. For the histogram to be representative, use the histogram worksheet to set up the histogram. It will help you determine the number of bars, the range of numbers that go into each bar and the labels for the bar edges.

2. Choose the cell interval and determine the number of cells.

The cell interval is the distance between cells and precisely defined as the difference between successive lower boundaries, successive upper boundaries, or successive midpoints.

a. Determine the range: R = XH − XL, where XH = highest observed value and XL = lowest observed value.

The cell interval should be an odd number and have the same number of decimal places as the original data. Suggestion: an odd interval is recommended so that the midpoint values will have the same number of decimal places as the data values.

b. Determine the cell interval:

b.1. STURGES' RULE: h = 1 + 3.322 log10(n)

b.2. TRIAL & ERROR: i = R / h, or equivalently h = (R / i) + 1, where i = the cell interval, h = the number of cells, and n = the number of data points.

Choose the cell interval that gives a number of cells closest to the square root of the sample size (h ≈ √n). If two intervals are equally close, choose the one that gives the lower number of cells.

c. Determine the cell midpoints: MPlowest = lower boundary + i/2, where MPlowest = the midpoint for the lowest cell.

Determine the midpoint of the lowest cell. This is the assumed average of the cell (all measurements within the cell are assumed to have this value). The midpoint is calculated by adding the upper and lower boundaries and dividing by 2.

d. Determine the cell boundaries.

Choose the lower boundary of the lowest cell. Allocation of empty data points is made between the lowest and the highest cells (if rounding had been necessary). This amount of allocation depends on the amount of rounding done.

Determine the upper boundary of the lowest cell by adding the cell interval to the lower boundary and subtracting 1 from the last digit.

e. Post the cell frequencies.

3. Draw x- and y-axes on graph paper. Mark and label the y-axis for counting data values. Mark and label the x-axis with the values from the worksheet. The spaces between these numbers will be the bars of the histogram. Do not allow for spaces between bars.

4. For each data point, mark off one count above the appropriate bar with an X or by shading that portion of the bar.
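The following Python sketch (an assumption-laden illustration, not the original worksheet) walks through the steps above on a small made-up data set: find the range, pick an odd cell interval, set the cell boundaries half a unit beyond the data precision, compute the midpoints, and post the frequencies.

```python
# A minimal sketch of the construction procedure, assuming made-up integer data.
import numpy as np

data = np.array([12, 13, 13, 14, 14, 14, 15, 15, 16, 18])   # made-up example data

x_high, x_low = data.max(), data.min()
R = x_high - x_low                 # step 2a: the range
i = 1                              # assumed odd cell interval (same precision as the data)

lower = x_low - i / 2              # step 2d: lower boundary of the lowest cell
edges = np.arange(lower, x_high + i, i)
midpoints = edges[:-1] + i / 2     # step 2c: cell midpoints

freq, _ = np.histogram(data, bins=edges)    # step 2e: post the cell frequencies
for m, f in zip(midpoints, freq):
    print(f"{m:>5.1f}: {'X' * f}")          # step 4: mark one X per observation
```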

Before drawing any conclusions from your histogram, satisfy yourself that the process was operating normally during the time period being studied. If any unusual events affected the process during the time period of the histogram, your analysis of the histogram shape probably cannot be generalized to all time periods.

Analyze the meaning of your histogram’s shape. Histograms are limited in their use due to the random order in which samples are taken and lack of information about the state of control of the process. Because samples are gathered without regard to order, the time-dependent or time-related trends in the process are not captured. So, what may appear to be the central tendency of the data may be deceiving. With respect to process statistical control, the histogram gives no indication whether the process was operating at its best when the data was collected. This lack of information on process control may lead to incorrect conclusions being drawn and, hence, inappropriate decisions being made. Still, with these considerations in mind, the histogram's simplicity of construction and ease of use make it an invaluable tool in the elementary stages of data analysis.

Table for the Class Interval

The Number of Data    The Class Number
< 50                  5 ~ 7
50 ~ 100              6 ~ 10    4 ~ 9
100 ~ 200             7 ~ 12
> 250                 8 ~ 17
> 500                 10 ~ 20
                      15 ~ 20

Sources: Kaoru Ishikawa (Guide to Quality Control); Donna C. S. Summers (Quality, 4th Ed.)

EXAMPLE: 1A.

1. Determine the range of the data by subtracting the smallest observed measurement from the largest and designate it as R.

Example:

Largest observed measurement = 1.1185 inches

Smallest observed measurement = 1.1030 inches

R = 1.1185 inches - 1.1030 inches =.0155 inch

2. Record the measurement unit (MU) used. This is usually controlled by the measuring instrument least count.

Example: MU = .0001 inch

3. Determine the number of classes and the class width. The number of classes, k, should be no lower than six and no higher than fifteen for practical purposes. Trial and error may be done to achieve the best distribution for analysis.

Example: k=8

4. Determine the class width (H) by dividing the range, R, by the preferred number of classes, k.

Example: R/k = .0155/8 = .0019375 inch

The class width selected should be an odd-numbered multiple of the measurement unit, MU. This value should be close to the H value:

MU = .0001 inch

Class width = .0019 inch or .0021 inch

5. Establish the class midpoints and class limits. The first class midpoint should be located near the largest observed measurement. If possible, it should also be a convenient increment. Always make the class widths equal in size, and express the class limits in terms which are one-half unit beyond the accuracy of the original measurement unit. This avoids plotting an observed measurement on a class limit.

Example: First class midpoint = 1.1185 inches,

and the class width is .0019 inch.

Therefore, limits would be 1.1185 + or - .0019/2.


6. Determine the axes for the graph. The frequency scale on the vertical axis should slightly exceed the largest class frequency, and the measurement scale along the horizontal axis should be at regular intervals which are independent of the class width. (See example below steps.)

7. Draw the graph. Mark off the classes, and draw rectangles with heights corresponding to the measurement frequencies in that class.

8. Title the histogram. Give an overall title and identify each axis.
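A short sketch of the arithmetic in Example 1A above; the numbers come from the example, while the code itself is only an illustration.

```python
# A minimal sketch of the Example 1A calculations (shaft-diameter data in inches).
largest, smallest = 1.1185, 1.1030
MU = 0.0001                  # measurement unit (instrument least count)
k = 8                        # chosen number of classes

R = largest - smallest       # range: 0.0155 in
H = R / k                    # trial class width: about 0.0019375 in

candidates = [0.0019, 0.0021]   # odd-numbered multiples of MU near H
print(f"R = {R:.4f}  H = {H:.7f}  class width candidates = {candidates}")

first_midpoint = 1.1185
half = 0.0019 / 2
print(f"first class limits: {first_midpoint - half:.5f} to {first_midpoint + half:.5f}")
```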

EXAMPLE: 1B.

The following example shows data collected from an experiment measuring pellet penetration depth from a pellet gun in inches and the corresponding histogram:

Penetration depth (inches) 2 3 3 3 3 4 4 4 5 5 6 6

Some important things to remember when constructing a histogram: use intervals of equal length; show the entire vertical axis, beginning with zero; do not break either axis; keep a uniform scale across the axis; and center the histogram bars at the midpoint of the intervals (in this case, the penetration depth intervals).


SHAPE DISTRIBUTION DESCRIPTION & RECOMMENDATIONS:

1. Normal: Symmetric, Moderate-Tailed Histogram

Note the classical bell-shaped, symmetric histogram with most of the frequency counts bunched in the middle and with the counts dying off out in the tails. From a physical science/engineering point of view, the normal distribution is that distribution which occurs most often in nature (due in part to the central limit theorem).

Recommended Next Step If the histogram indicates a symmetric, moderate-tailed distribution, then the recommended next step is to do a normal probability plot to confirm approximate normality. If the normal probability plot is linear, then the normal distribution is a good model for the data.
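A minimal sketch (assumed workflow, not from the text) of that recommended next step using SciPy's probability plot; the simulated data are a stand-in for real process measurements.

```python
# A minimal sketch: normal probability plot of (simulated) process data.
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=0.05, size=100)   # stand-in for process measurements

stats.probplot(x, dist="norm", plot=plt)          # ordered data vs. normal quantiles
plt.title("Normal probability plot")              # a roughly linear plot supports normality
plt.show()
```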

2. Symmetric, Non-Normal, Short-Tailed

Description of What Short-Tailed Means

For a symmetric distribution, the "body" of a distribution refers to the "center" of the distribution--commonly that region of the distribution where most of the probability resides--the "fat" part of the distribution. The "tail" of a distribution refers to the extreme regions of the distribution--both left and right. The "tail length" of a distribution is a term that indicates how fast these extremes approach zero.

For a short-tailed distribution, the tails approach zero very fast. Such distributions commonly have a truncated ("sawed-off") look. The classical short-tailed distribution is the uniform (rectangular) distribution in which the probability is constant over a given range and then drops to zero everywhere else--we would speak of this as having no tails, or extremely short tails.

For a moderate-tailed distribution, the tails decline to zero in a moderate fashion. The classical moderate-tailed distribution is the normal (Gaussian) distribution.

For a long-tailed distribution, the tails decline to zero very slowly--and hence one is apt to see probability a long way from the body of the distribution. The classical long-tailed distribution is the Cauchy distribution.

In terms of tail length, the histogram shown above would be characteristic of a "short-tailed" distribution.

The optimal (unbiased and most precise) estimator for location for the center of a distribution is heavily dependent on the tail length of the distribution. The common choice of taking N observations and using the calculated sample mean as the best estimate for the center of the distribution is a good choice for the normal distribution (moderate tailed), a poor choice for the uniform distribution (short tailed), and a horrible choice for the Cauchy distribution (long tailed). Although for the normal distribution the sample mean is as precise an estimator as we can get, for the uniform and Cauchy distributions, the sample mean is not the best estimator.


For the uniform distribution, the midrange

midrange = (smallest + largest) / 2

is the best estimator of location. For a Cauchy distribution, the median is the best estimator of location.

Symmetric, Short-Tailed Histogram

Recommended Next Step If the histogram indicates a symmetric, short-tailed distribution, the recommended next step is to generate a uniform probability plot. If the uniform probability plot is linear, then the uniform distribution is an appropriate model for the data.

3. Symmetric, Non-Normal, Long-Tailed

Description of Long-Tailed

The previous example contains a discussion of the distinction between short-tailed, moderate-tailed, and long-tailed distributions.

In terms of tail length, the histogram shown above would be characteristic of a "long-tailed" distribution

Recommended Next Step If the histogram indicates a symmetric, long tailed distribution, the recommended next step is to do a Cauchy probability plot. If the Cauchy probability plot is linear, then the Cauchy distribution is an appropriate model for the data. Alternatively, a Tukey Lambda PPCC plot may provide insight into a suitable distributional model for the data.

Symmetric, Long-Tailed Histogram


4. Symmetric and Bimodal

Symmetric, Bimodal Histogram

Description of Bimodal

The mode of a distribution is that value which is most frequently occurring or has the largest probability of occurrence. The sample mode occurs at the peak of the histogram.

For many phenomena, it is quite common for the distribution of the response values to cluster around a single mode (unimodal) and then distribute themselves with lesser frequency out into the tails. The normal distribution is the classic example of a unimodal distribution.

The histogram shown above illustrates data from a bimodal (2 peak) distribution. The histogram serves as a tool for diagnosing problems such as bimodality. Questioning the underlying reason for distributional non-unimodality frequently leads to greater insight and improved deterministic modeling of the phenomenon under study. For example, for the data presented above, the bimodal histogram is caused by sinusoidality in the data.

Recommended Next Step If the histogram indicates a symmetric, bimodal distribution, the recommended next steps are to:

1. Do a run sequence plot or a scatter plot to check for sinusoidality.
2. Do a lag plot to check for sinusoidality. If the lag plot is elliptical, then the data are sinusoidal.
3. If the data are sinusoidal, then a spectral plot is used to graphically estimate the underlying sinusoidal frequency.
4. If the data are not sinusoidal, then a Tukey Lambda PPCC plot may determine the best-fit symmetric distribution for the data.
5. The data may be fit with a mixture of two distributions. A common approach to this case is to fit a mixture of 2 normal or lognormal distributions. Further discussion of fitting mixtures of distributions is beyond the scope of this Handbook.

5. Bimodal Mixture of 2 Normals

Histogram from Mixture of 2 Normal Distributions


Discussion of Unimodal and Bimodal

The histogram shown above illustrates data from a bimodal (2 peak) distribution.

In contrast to the previous example, this example illustrates bimodality due not to an underlying deterministic model, but bimodality due to a mixture of probability models. In this case, each of the modes appears to have a rough bell-shaped component. One could easily imagine the above histogram being generated by a process consisting of two normal distributions with the same standard deviation but with two different locations (one centered at approximately 9.17 and the other centered at approximately 9.26). If this is the case, then the research challenge is to determine physically why there are two similar but separate sub-processes.

Recommended Next Steps

If the histogram indicates that the data might be appropriately fit with a mixture of two normal distributions, the recommended next step is:

Fit the normal mixture model using either least squares or maximum likelihood. The general normal mixing model is

f(x) = p·φ1(x) + (1 − p)·φ2(x)

where p is the mixing proportion (between 0 and 1) and φ1 and φ2 are normal probability density functions with location and scale parameters μ1, σ1 and μ2, σ2, respectively. That is, there are 5 parameters to estimate in the fit.

Whether maximum likelihood or least squares is used, the quality of the fit is sensitive to good starting values. For the mixture of two normals, the histogram can be used to provide initial estimates for the location and scale parameters of the two normal distributions.
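A minimal sketch (an assumed approach, not the handbook's own code) of fitting a mixture of two normal distributions by maximum likelihood with scikit-learn's GaussianMixture; the simulated data mimic the two sub-processes centred near 9.17 and 9.26 mentioned above.

```python
# A minimal sketch: maximum-likelihood fit of a two-component normal mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# made-up stand-in data resembling two sub-processes centred near 9.17 and 9.26
x = np.concatenate([rng.normal(9.17, 0.02, 120), rng.normal(9.26, 0.02, 80)])

gm = GaussianMixture(n_components=2).fit(x.reshape(-1, 1))
print("mixing proportions:", gm.weights_)                  # estimate of p and 1 - p
print("locations:", gm.means_.ravel())                     # estimates of mu1, mu2
print("scales:", np.sqrt(gm.covariances_).ravel())         # estimates of sigma1, sigma2
```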

6. Skewed (Non-Symmetric) Right

Right-Skewed Histogram

Discussion of Skewness

A symmetric distribution is one in which the 2 "halves" of the histogram appear as mirror-images of one another. A skewed (non-symmetric) distribution is a distribution in which there is no such mirror-imaging.

For skewed distributions, it is quite common to have one tail of the distribution considerably longer or drawn out relative to the other tail. A "skewed right" distribution is one in which the tail is on the right side. A "skewed left" distribution is one in which the tail is on the left side. The above histogram is for a distribution that is skewed right.

Skewed distributions bring a certain philosophical complexity to the very process of estimating a "typical value" for the distribution. To be specific, suppose that the analyst has a collection of 100 values randomly drawn from a distribution, and wishes to summarize these 100 observations by a "typical value". What does typical value mean? If the distribution is symmetric, the typical value is unambiguous: it is a well-defined center of the distribution. For example, for a bell-shaped symmetric distribution, a center point is identical to that value at the peak of the distribution.

For a skewed distribution, however, there is no "center" in the usual sense of the word. Be that as it may, several "typical value" metrics are often used for skewed distributions. The first metric is the mode of the distribution. Unfortunately, for severely skewed distributions, the mode may be at or near the left or right tail of the data and so it seems not to be a good representative of the center of the distribution. As a second choice, one could argue that the mean (the point on the horizontal axis where the distribution would balance) would serve well as the typical value. As a third choice, others may argue that the median (the value on the horizontal axis which has exactly 50% of the data to the left, and also to the right) would serve as a good typical value.

For symmetric distributions, the conceptual problem disappears because at the population level the mode, mean, and median are identical. For skewed distributions, however, these 3 metrics are markedly different. In practice, for skewed distributions the most commonly reported typical value is the mean; the next most common is the median; the least common is the mode. Because each of these 3 metrics reflects a different aspect of "centerness", it is recommended that the analyst report at least 2 (mean and median), and preferably all 3 (mean, median, and mode) in summarizing and characterizing a data set.

Some Causes for Skewed Data

Skewed data often occur due to lower or upper bounds on the data. That is, data that have a lower bound are often skewed right while data that have an upper bound are often skewed left. Skewness can also result from start-up effects. For example, in reliability applications some processes may have a large number of initial failures that could cause left skewness. On the other hand, a reliability process could have a long start-up period where failures are rare resulting in right-skewed data.

Data collected in scientific and engineering applications often have a lower bound of zero. For example, failure data must be non-negative. Many measurement processes generate only positive data. Time to occurrence and size are common measurements that cannot be less than zero.

Recommended Next Steps

If the histogram indicates a right-skewed data set, the recommended next steps are to:

1. Quantitatively summarize the data by computing and reporting the sample mean, the sample median, and the sample mode.

2. Determine the best-fit distribution (skewed-right) from the Weibull family (for the maximum), the Gamma family, the Chi-square family, the Lognormal family, or the Power lognormal family.

3. Consider a normalizing transformation such as the Box-Cox transformation.
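A minimal sketch (assumed workflow) of steps 1 and 3 for right-skewed data: report the location measures and try a Box-Cox normalizing transformation with SciPy; the lognormal sample is a made-up stand-in.

```python
# A minimal sketch: summarize a right-skewed sample and apply Box-Cox.
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=0.6, size=200)   # made-up right-skewed data

print("mean  :", np.mean(x))
print("median:", np.median(x))
print("skew  :", stats.skew(x))

xt, lam = stats.boxcox(x)                # Box-Cox transform; the data must be positive
print("Box-Cox lambda:", lam)
print("skew after transform:", stats.skew(xt))   # should be much closer to zero
```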

7. Skewed (Non-Symmetric) Left

Skewed Left Histogram


The issues for skewed left data are similar to those for skewed right data.

8. Symmetric with Outlier

Discussion of Outliers A symmetric distribution is one in which the 2 "halves" of the histogram appear as mirror-images of one another. The above example is symmetric with the exception of outlying data near Y = 4.5.

An outlier is a data point that comes from a distribution different (in location, scale, or distributional form) from the bulk of the data. In the real world, outliers have a range of causes, from as simple as

1. operator blunders 2. equipment failures 3. day-to-day effects 4. batch-to-batch differences 5. anomalous input conditions 6. warm-up effects

to more subtle causes such as 1. A change in settings of factors that (knowingly or unknowingly) affect the response. 2. Nature is trying to tell us something.

Symmetric Histogram with Outlier

Outliers Should be Investigated

All outliers should be taken seriously and should be investigated thoroughly for explanations. Automatic outlier-rejection schemes (such as throw out all data beyond 4 sample standard deviations from the sample mean) are particularly dangerous.

The classic case of automatic outlier rejection becoming automatic information rejection was the South Pole ozone depletion problem. Ozone depletion over the South Pole would have been detected years earlier except for the fact that the satellite data recording the low ozone readings had outlier-rejection code that automatically screened out the "outliers" (that is, the low ozone readings) before the analysis was conducted. Such inadvertent (and incorrect) purging went on for years. It was not until ground-based South Pole readings started detecting low ozone readings that someone decided to double-check as to why the satellite had not picked up this fact--it had, but it had gotten thrown out!

The best attitude is that outliers are our "friends", outliers are trying to tell us something, and we should not stop until we are comfortable in the explanation for each outlier.

Recommended Next Steps If the histogram shows the presence of outliers, the recommended next steps are:

1. Graphically check for outliers (in the commonly encountered normal case) by generating a box plot. In general, box plots are a much better graphical tool for detecting outliers than are histograms.

2. Quantitatively check for outliers (in the commonly encountered normal case) by carrying out Grubbs test which indicates how many sample standard deviations away from the sample mean are the data in question. Large values indicate outliers.
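The sketch below illustrates the quantitative check: a hand-rolled Grubbs' test for a single outlier under the normal assumption (this is an illustrative implementation with made-up data, not a library routine).

```python
# A minimal sketch of Grubbs' test for one outlier (two-sided, normal case).
import numpy as np
from scipy import stats

def grubbs_statistic(x):
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)   # sample std devs from the mean

def grubbs_critical(n, alpha=0.05):
    # critical value for the two-sided Grubbs test at significance level alpha
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

x = [9.9, 10.0, 10.1, 9.8, 10.2, 10.0, 9.9, 10.1, 10.0, 12.4]   # made-up data, last value suspect
G, Gcrit = grubbs_statistic(x), grubbs_critical(len(x))
print(G, Gcrit, "-> outlier" if G > Gcrit else "-> no outlier")
```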



EXAMPLE : 2

In a new plant, time cards need to be submitted to the payroll office within 30 minutes after the close of the last shift on Friday. After twenty weeks of operation, the payroll office complained to the plant manager that this process was not being adhered to; as a result the payroll checks were not being processed on time and workers were starting to complain. The requirement is ≤ 30 minutes.

The sample data gathered by the payroll office are as follows:

30 30 * 27 * 33

30 30 32 28

28 30 32 29

29 30 31 29

31 30 31 30

R = XH – XL = 33 – 27 = 6

27 = 1

28 = 2

29 = 3

30 = 8

31 = 3

32 = 2

33 = 1

TOTAL = 20

[Figure: frequency histogram of the time-card data, submission times 27–33 minutes on the horizontal axis, frequencies 0–9 on the vertical axis.]
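A minimal sketch (the plotting choices are assumptions, not part of the original worksheet) that reproduces the Example 2 tally and bar chart with NumPy and matplotlib.

```python
# A minimal sketch: tally and plot the Example 2 time-card data.
import numpy as np
import matplotlib.pyplot as plt

times = [30, 30, 27, 33, 30, 30, 32, 28, 28, 30,
         32, 29, 29, 30, 31, 29, 31, 30, 31, 30]     # minutes, from the table above

values, counts = np.unique(times, return_counts=True)
for v, c in zip(values, counts):
    print(v, "=", c)                                   # 27 = 1, 28 = 2, ..., 33 = 1

plt.bar(values, counts, width=1.0, edgecolor="black")
plt.axvline(30.5, linestyle="--")   # assumed visual marker near the 30-minute requirement
plt.xlabel("Minutes after shift end")
plt.ylabel("Frequency")
plt.show()
```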


EXAMPLE : 3

Below are data taken from the pressing machine that produces the ceramic part for an IC ROM, before the part is sent to the funnel burner.

9.85 10.00 9.95 10.00 10.00 10.00 9.95 10.15 9.85 10.05

9.90 10.05 9.85 10.10 10.00 10.15 9.90 10.00 *9.80 10.00

9.90 10.05 9.90 10.10 9.95 10.00 9.90 9.90 9.90 10.00

9.90 10.10 9.90 *10.20 9.95 10.00 9.85 9.95 9.95 10.10

9.95 10.15 9.95 10.00 9.90 10.00 10.00 10.10 9.95 10.15

R = XH – XL = 10.20 – 9.80 = 0.40

9.80 = 1

9.85 = 4

9.90 = 10

9.95 = 9

10.00 = 13

10.05 = 3

10.10 = 5

10.15 = 4

10.20 = 1

TOTAL = 50

H = (R/i) + 1

i = 0.07: (0.40/0.07) + 1 = 5.7 + 1 ≈ 7
i = 0.05: (0.40/0.05) + 1 = 8 + 1 = 9
i = 0.03: (0.40/0.03) + 1 = 13.3 + 1 ≈ 14

Choosing i = 0.05, the half-interval is 0.05 ÷ 2 = 0.025, so:

Range Center

9.785 – 9.835 9.81 = 1

9.835 – 9.885 9.86 = 4

9.885 – 9.935 9.91 = 10

9.935 – 9.985 9.96 = 9

9.985 – 10.035 10.01 = 13

10.035 – 10.085 10.06 = 3

10.085 – 10.135 10.11 = 5

10.135 – 10.185 10.16 = 4

10.185 – 10.235 10.21 = 1

TOTAL =50

Choose the higher one


[Fig. A: frequency histogram of the individual measured values 9.80–10.20, frequencies 0–14.]

[Fig. B: frequency histogram of the grouped cells with midpoints 9.81, 9.86, ..., 10.21, frequencies 0–14.]
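A minimal sketch (illustrative only) redoing the Example 3 grouping with NumPy: the 50 pressing measurements, grouped with cell width 0.05 and boundaries 9.785, 9.835, ..., 10.235, reproduce the tally above.

```python
# A minimal sketch: regroup the Example 3 data with explicit cell boundaries.
import numpy as np

data = [9.85, 10.00, 9.95, 10.00, 10.00, 10.00, 9.95, 10.15, 9.85, 10.05,
        9.90, 10.05, 9.85, 10.10, 10.00, 10.15, 9.90, 10.00, 9.80, 10.00,
        9.90, 10.05, 9.90, 10.10, 9.95, 10.00, 9.90, 9.90, 9.90, 10.00,
        9.90, 10.10, 9.90, 10.20, 9.95, 10.00, 9.85, 9.95, 9.95, 10.10,
        9.95, 10.15, 9.95, 10.00, 9.90, 10.00, 10.00, 10.10, 9.95, 10.15]

edges = np.arange(9.785, 10.236, 0.05)           # 9.785, 9.835, ..., 10.235 (10 boundaries, 9 cells)
midpoints = np.round(edges[:-1] + 0.025, 2)      # 9.81, 9.86, ..., 10.21

freq, _ = np.histogram(data, bins=edges)
for m, f in zip(midpoints, freq):
    print(m, "=", f)                              # reproduces the table above; frequencies sum to 50
```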

EXAMPLE : 4

A quality improvement team at an emergency fire-fighting squad was interested in studying the response time to emergency service calls (in particular, the time interval between the customer call-in and the arrival of a service crew at the scene). Shown below is a table of 40 actual response-time measurements (in minutes) collected by the team. Although the table contains a great deal of information about the variation in response times, it is difficult to extract that information from a simple list of numbers.

61 48 62 62 44 52 53 * 84 53 71

* 39 62 68 50 58 54 66 53 53 77

60 59 71 51 76 50 57 59 55 59

59 74 67 62 64 68 55 46 63 64

XH = 84 ; XL = 39 R = XH – XL = 84 -39 = 45

i = R/h:

h = 3: i = 45/3 = 15
h = 5: i = 45/5 = 9
h = 7: i = 45/7 ≈ 7
h = 9: i = 45/9 = 5

A cell interval of i = 5 is chosen.

Range Center

37 ~ 41 39 = 1

42 ~ 46 44 = 2

47 ~ 51 49 = 4

52 ~ 56 54 = 8

57 ~ 61 59 = 8

62 ~ 66 64 = 8

67 ~ 71 69 = 5

72 ~ 76 74 = 2

77 ~ 81 79 = 1

82 ~ 86 84 = 1

TOTAL = 40



[Figure: frequency histogram of the response times, cell midpoints 39–84 minutes, frequencies 0–9.]

For a simpler presentation, and because the data are times, it is convenient to round the cell values to a natural time scale, as follows:

10 20 30 40 50 60 70 80 90 1 hrs 1 ½ hrs

So we choose 40 as the lowest value and 90 as the highest.

Range Center

38 ~ 42 40 = 1

43 ~ 47 45 = 2

48 ~ 52 50 = 5

53 ~ 57 55 = 8

58 ~ 62 60 = 11

63 ~ 67 65 = 5

68 ~ 72 70 = 4

73 ~ 77 75 = 3

78 ~ 82 80 = 0

83 ~ 87 85 = 1

TOTAL = 40

[Figure: frequency histogram of the response times after rounding, cell midpoints 40–85 minutes, frequencies 0–12.]


EXAMPLE : 5

A manufacturing engineering department started working on a process check of the plating (chroming) for a sheet-metal part attached to the locking system of a safe-deposit-box product, since a new plate design had been introduced to avoid corrosion. In response to a customer issue, the quality engineer studied the thickness of the chrome plating, as in the table below:

0.625 0.625 0.626 0.626 0.624 0.625 0.625 0.628 0.627 0.627

0.624 0.624 0.623 0.626 0.624 0.624 0.626 0.625 0.625 0.627

0.622 0.628 0.625 0.627 0.623 0.628 0.625 0.626 0.626 0.630

0.624 0.627 0.623 0.626 * 0.620 0.628 0.623 0.625 0.624 0.627

0.621 0.626 0.621 0.625 0.622 0.626 0.625 0.625 0.624 0.627

0.628 0.627 0.626 0.626 0.625 0.628 0.626 0.625 0.627 0.627

0.624 0.625 0.627 0.626 0.625 0.628 0.624 0.625 0.626 0.627

0.624 0.628 0.625 0.626 0.625 0.627 0.626 0.630 0.626 0.627

0.627 0.625 0.628 0.631 0.626 0.630 0.625 0.628 0.627 0.627

0.625 0.627 0.626 0.630 0.628 0.631 0.626 0.628 0.627 0.627

0.625 0.630 0.624 0.628 0.626 0.629 0.626 0.628 0.626 0.627

0.630 0.630 0.628 0.628 0.627 0.631 0.625 0.628 0.627 0.627

0.627 * 0.632 0.626 * 0.632 0.628 0.628 0.627 0.631 0.626 0.630

0.626 0.630 0.626 0.628 0.625 0.631 0.626 * 0.632 0.627 0.631

0.628 * 0.632 0.627 0.631 0.626 0.630 0.625 0.628 0.626 0.628

VH = 0.632 ; VL = 0.620 R = VH – VL = 0.632 – 0.620 = 0.012

h = (R/i) + 1:

i = 0.003: (0.012/0.003) + 1 = 5
i = 0.005: (0.012/0.005) + 1 ≈ 3
i = 0.007: (0.012/0.007) + 1 ≈ 3

A cell interval of i = 0.003 (h = 5 cells) is chosen.

No Lowest ~ Highest MidPoint

1 0.6185 ~ 0.6215 0.620 = 3

2 0.6215 ~ 0.6245 0.623 = 18

3 0.6245 ~ 0.6275 0.626

= 82

4 0.6275 ~ 0.6305 0.629 = 36

5 0.6305 ~ 0.6335 0.632 = 11

TOTAL = 150

[Figure: frequency histogram with 5 cells, midpoints 0.620, 0.623, 0.626, 0.629, 0.632; frequencies 0–90.]


N = 150; √N = √150 ≈ 12.2, so about 13 classes are used (here each distinct measured value is taken as a class):

No Scale

1 0.620 = 1

2 0.621 = 2

3 0.622 = 2

4 0.623 = 4

5 0.624 = 12

6 0.625 = 26

7 0.626 = 30

8 0.627 = 28

9 0.628 = 23

10 0.629 = 1

11 0.630 = 11

12 0.631 = 7

13 0.632 = 4

TOTAL = 150

[Figure: frequency histogram with 13 classes at 0.620, 0.621, ..., 0.632; frequencies 0–35.]

EXAMPLE : 6

Below are 100 metal-thickness measurements taken on an optic component to determine the data distribution.

3.56 3.46 3.48 3.50 3.42 3.43 3.52 3.49 3.44 3.50

3.48 3.56 3.50 3.52 3.47 3.48 3.46 3.50 3.56 3.38

3.41 3.37 3.47 3.49 3.45 3.44 3.50 3.49 3.46 3.46

3.55 3.52 3.44 3.50 3.45 3.44 3.48 3.46 3.52 3.46

3.48 3.48 3.32 3.40 3.52 3.34 3.46 3.43 * 3.30 3.46

3.59 3.63 3.59 3.47 3.38 3.52 3.45 3.48 3.31 3.46

3.40 3.54 3.46 3.51 3.48 3.50 * 3.68 3.60 3.46 3.52

3.48 3.50 3.56 3.50 3.52 3.46 3.48 3.46 3.52 3.56

3.52 3.48 3.46 3.45 3.46 3.54 3.54 3.48 3.49 3.41

3.41 3.45 3.34 3.44 3.47 3.47 3.41 3.48 3.54 3.47

XH = 3.68 ; XL = 3.30 R = 3.68 – 3.30 = 0.38


h = R/i: i = 0.03 gives 0.38/0.03 ≈ 13 cells; i = 0.05 gives ≈ 7; i = 0.07 gives ≈ 5; i = 0.09 gives ≈ 4. A cell interval of i = 0.05 is chosen.

No Lowest ~ Highest MidPoint

1 3.275 ~ 3.325 3.30 = 3

2 3.325 ~ 3.375 3.35 = 3

3 3.375 ~ 3.425 3.40 = 9

4 3.425 ~ 3.475 3.45 = 32

5 3.475 ~ 3.525 3.50 = 38

6 3.525 ~ 3.575 3.55 = 10

7 3.575 ~ 3.625 3.60 = 3

8 3.625 ~ 3.675 3.65 = 1

9 3.675 ~ 3.725 3.70 = 1

TOTAL = 100

[Figure: frequency histogram with 9 cells, midpoints 3.30–3.70; frequencies 0–40.]

EXAMPLE : 7

Automated data acquisition systems generate timely data about the product produced and the process producing it (here, silicon wafer thickness). For this reason, the company uses an integrated system of automated statistical process control programming, data collection devices, and programmable logic controllers (PLCs) to collect statistical information about silicon wafer production. Utilizing the system relieves the process engineers from the burden of number crunching, freeing time for critical analysis of the data. The data are below:

0.2500 0.2500 0.2520 0.2490 0.2500 0.2510

0.2510 0.2510 0.2480 0.2490 0.2510 0.2500

0.2510 0.2500 0.2490 0.2510 0.2500 0.2520

0.2490 0.2500 0.2470 0.2520 0.2490 0.2510

0.2500 0.2500 0.2480 0.2500 0.2500 0.2530

0.2510 0.2510 0.2500 0.2490 0.2500 0.2530

0.2510 0.2490 0.2490 0.2500 0.2520 0.2540

0.2500 0.2490 0.2470 0.2490 0.2480 0.2510

0.2500 0.2470 0.2500 0.2500 0.2480 0.2520

0.2480 0.2470 0.2470 0.2510 0.2520 0.2510

N = 60; √N = √60 ≈ 7.7 ≈ 8, so 8 classes are used:

No MidPoint

1 0.2470 = 5

2 0.2480 = 5

3 0.2490 = 10

4 0.2500 = 18

5 0.2510 = 13

6 0.2520 = 6

7 0.2530 = 2

8 0.2540 = 1

TOTAL = 60

[Figure: frequency histogram with 8 classes, midpoints 0.2470–0.2540; frequencies 0–20.]


EXERCISE:

1. The following table lists the number of failures in 120 samples of a certain electronic component tested for 100 hours each.

10 13 13 11 13 8 16 18 13 15

12 13 14 12 13 15 14 13 11 9

13 15 11 13 14 17 10 12 5 15

11 14 12 10 13 11 13 8 14 18

14 13 14 11 14 12 13 11 13 16

13 14 13 14 13 12 14 12 11 15

13 14 11 14 13 10 9 12 11 15

14 11 12 13 14 13 12 13 17 7

12 13 14 13 12 17 13 11 15 16

10 4 8 12 11 7 9 10 6 9

11 15 14 16 17 12 13 16 16 15

15 16 16 13 14 16 6 13 14 16

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

2. The following table lists 100 cereal box weights:

1.12 0.81 1.30 1.22 1.00 1.46 1.19 0.72 1.32 1.36

1.81 1.81 1.44 1.69 1.90 1.52 1.66 1.76 1.61 1.55

1.02 2.03 1.67 1.55 1.62 1.79 1.62 1.54 1.48 1.73

1.88 1.43 1.54 1.29 1.37 1.65 1.77 1.43 1.71 1.66

1.94 1.67 1.43 1.61 1.55 1.73 1.42 1.66 1.79 1.77

2.17 1.55 1.12 1.63 1.43 1.63 1.36 1.58 1.79 1.41

1.77 1.44 1.61 1.43 1.61 1.74 1.67 1.72 1.81 1.65

0.82 1.52 1.33 1.57 1.98 1.85 1.63 1.44 1.87 1.69

1.25 1.43 1.55 1.68 1.77 1.68 1.54 1.76 1.64 1.57

1.65 2.15 1.90 1.18 1.60 1.45 1.63 1.85 1.67 1.36

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

3. The tensile strength of 100 castings was tested, and the results are listed in the table below.

15.1 11.0 5.3 13.3 13.0 10.9 9.4 4.2 12.3 13.0

10.6 6.0 10.6 7.0 10.8 8.0 10.8 8.0 10.9 6.7

9.4 4.8 8.9 9.0 10.1 11.0 10.5 5.7 7.6 16.6

6.0 10.7 7.8 8.9 9.2 12.6 16.7 7.9 9.5 15.8

8.6 6.6 16.7 7.1 11.5 15.9 9.5 15.8 8.4 14.2

12.3 13.9 9.3 13.7 7.0 10.8 8.7 7.6 6.5 15.4

14.9 9.3 13.0 10.0 10.1 11.2 12.0 10.9 9.2 12.6

16.1 11.5 15.2 12.6 16.4 14.0 10.1 11.2 12.6 16.5

9.6 6.3 16.8 6.1 10.5 15.5 11.5 9.8 11.4 14.2

12.1 13.1 11.3 10.7 8.2 10.8 8.7 9.6 7.1 13.7


a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

4. Roger and Bill are trying to settle an argument. Roger says that averages from a deck of cards will form a normal curve when plotted. Bill says they won’t. They decide to try the following exercise involving averages. (An ace is worth 1 point, a jack 11, a queen 12, and a king 13.)

No Average No Average

1 11 13 10 12 7 10.6 26 8 10 1 9 10 7.6

2 1 7 12 9 3 6.4 27 9 13 2 2 2 5.6

3 9 5 12 1 11 7.6 28 12 4 12 3 13 8.8

4 11 5 7 9 12 8.8 29 12 4 7 6 9 7.6

5 7 12 13 7 4 8.6 30 1 12 3 12 11 7.8

6 11 9 5 1 13 7.8 31 12 3 10 11 6 8.4

7 1 4 13 12 13 8.6 32 3 5 10 2 7 5.4

8 13 3 2 6 12 7.2 33 9 1 2 3 11 5.2

9 2 4 1 10 13 6.0 34 6 8 6 13 9 8.4

10 4 5 12 1 9 6.2 35 2 12 5 10 4 6.6

11 2 5 7 7 11 6.4 36 6 4 8 9 12 7.8

12 6 9 8 2 12 7.4 37 9 13 3 10 1 7.2

13 2 3 6 11 11 6.6 38 2 1 13 7 5 5.6

14 2 6 9 11 13 8.2 39 10 11 5 12 13 10.2

15 6 8 8 9 1 6.4 40 13 2 8 2 11 7.2

16 3 4 12 1 6 5.2 41 2 10 5 4 11 6.4

17 8 1 8 6 10 6.6 42 10 4 12 7 11 8.8

18 5 7 6 8 8 6.8 43 13 13 7 1 10 8.8

19 2 5 4 10 1 4.4 44 9 10 7 11 11 9.6

20 5 7 12 7 8 7.8 45 6 7 8 7 4 6.4

21 9 1 3 6 12 6.2 46 1 4 12 11 13 8.2

22 1 13 9 3 6 6.4 47 9 11 8 1 11 8.0

23 4 5 13 5 7 6.8 48 8 13 10 13 4 9.6

24 3 7 9 8 10 7.4 49 12 11 11 2 3 7.8

25 1 7 6 6 1 4.2 50 2 12 5 11 9 7.8

a. Construct a frequency histogram. b. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies if they calculate the average for the five cards.

5. The ABC Company is planning to analyze the average weekly wage distribution of its 58 employees during fiscal year 2003. The 58 weekly wages are available as raw data, listed below in the alphabetical order of the employees’ names:

241 253 312 258 264 265

316 242 257 251 282 305

298 276 284 304 285 307

263 301 262 272 271 265

249 229 253 285 267 250

288 248 276 280 252 258

262 314 241 257 250 275

275 301 283 249 288 275

281 276 289 228 275

170 289 262 282 260


a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

6. Given the ordered arrays in the accompanying table dealing with the lengths of life (in hours) of a sample of forty 100-watt light bulbs produced by manufacturer A and a sample of forty 100-watt light bulbs produced by manufacturer B.

MANUFACTURER A MANUFACTURER B

684 697 720 773 821 819 836 888 897 903

831 835 848 852 852 907 912 918 942 943

859 860 868 870 876 952 959 962 986 992

893 899 905 909 911 994 1004 1005 1007 1015

922 924 926 926 938 1016 1018 1020 1022 1034

939 943 946 954 971 1038 1072 1077 1077 1082

972 977 984 1005 1041 1096 1100 1113 1113 1116

1016 1041 1052 1080 1093 1153 1154 1174 1188 1230

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

7. The following data represent the amount of soft drink filled in a sample of 50 consecutive 2-liter bottles. The results are listed horizontally in the order of filling:

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

2.109 2.086 2.066 2.075 2.065 2.057 2.052 2.044 2.036 2.038

2.031 2.029 2.025 2.029 2.023 2.020 2.015 2.014 2.013 2.014

2.012 2.012 2.012 2.010 2.005 2.003 1.999 1.996 1.997 1.992

1.994 1.986 1.984 1.981 1.973 1.975 1.971 1.969 1.966 1.967

1.963 1.957 1.951 1.951 1.947 1.941 1.941 1.938 1.908 1.894

8. In his last 70 games a professional basketball player made the following scores:

10 17 9 17 18 20 16

7 17 19 13 15 14 13

12 13 15 14 13 10 14

11 15 14 11 15 15 16

9 18 15 12 14 13 14

13 14 16 15 16 15 15

14 15 15 16 13 12 16

10 16 14 13 16 14 15

6 15 13 16 15 16 16

12 14 16 15 16 13 15

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.


9. A company that fills bottles of shampoo tries to maintain a specific weight of the product. The table gives the weights of 110 bottles that were checked at random intervals. Make a tally of these weights and construct a frequency histogram (weight in kg).

6.00 5.98 6.01 6.01 5.97 5.99 5.98 6.01 5.99 5.98 5.96

5.98 5.99 5.99 6.03 5.99 6.01 5.98 5.99 5.97 6.01 5.98

5.97 6.01 6.00 5.96 6.00 5.97 5.95 5.99 5.99 6.01 6.00

6.01 6.03 6.01 5.99 5.99 6.02 6.00 5.98 6.01 5.98 5.99

6.00 5.98 6.05 6.00 6.00 5.98 5.99 6.00 5.97 6.00 6.00

6.00 5.98 6.00 5.94 5.99 6.02 6.00 5.98 6.02 6.01 6.00

5.97 6.01 6.04 6.02 6.01 5.97 5.99 6.02 5.99 6.02 5.99

6.02 5.99 6.01 5.98 5.99 6.00 6.02 5.99 6.02 5.95 6.02

5.96 5.99 6.00 6.00 6.01 5.99 5.96 6.01 6.00 6.01 5.98

6.00 5.99 5.98 5.99 6.03 5.99 6.02 5.98 6.02 6.02 5.97

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

10. Listed next are 125 readings obtained in a hospital by a motion-and-time study analyst who took 5 readings each day for 25 days.

DAY DURATION OF OPERATION TIME

(MIN)

1 1.90 1.93 1.95 2.05 2.20

2 1.76 1.81 1.81 1.83 2.01

3 1.80 1.87 1.95 1.97 2.07

4 1.77 1.83 1.87 1.90 1.93

5 1.93 1.95 2.03 2.05 2.14

6 1.76 1.88 1.95 1.97 2.00

7 1.87 2.00 2.00 2.03 2.10

8 1.91 1.92 1.94 1.97 2.05

9 1.90 1.91 1.95 2.01 2.05

10 1.79 1.91 1.93 1.94 2.10

11 1.90 1.97 2.00 2.06 2.28

12 1.80 1.82 1.89 1.91 1.99

13 1.75 1.83 1.92 1.95 2.04

14 1.87 1.90 1.98 2.00 2.08

15 1.90 1.95 1.95 1.97 2.03

16 1.82 1.99 2.01 2.06 2.06

17 1.90 1.95 1.95 2.00 2.10

18 1.81 1.90 1.94 1.97 1.99

19 1.87 1.89 1.98 2.01 2.15

20 1.72 1.78 1.96 2.00 2.05

21 1.87 1.89 1.91 1.91 2.00

22 1.76 1.80 1.91 2.06 2.12

23 1.95 1.96 1.97 2.00 2.00

24 1.82 1.94 1.97 1.99 2.00

25 1.85 1.90 1.90 1.92 1.92

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.


11. The relative strength of 150 silver solder welds was tested, and the results are given in the following table. Tally these figures and arrange them in a frequency distribution.

1.5 1.2 3.1 1.3 0.7 1.3

0.1 2.9 1.0 1.3 2.6 1.7

0.3 0.7 2.4 1.5 0.7 2.1

3.5 1.1 0.7 0.5 1.6 1.4

1.7 3.2 3.0 1.7 2.8 2.2

1.8 2.3 3.3 3.1 3.3 2.9

2.2 1.2 1.3 1.4 2.3 2.5

3.1 2.1 3.5 1.4 2.8 2.8

1.5 1.9 2.0 3.0 0.9 3.1

1.9 1.7 1.5 3.0 2.6 1.0

2.9 1.8 1.4 1.4 3.3 2.4

1.8 2.1 1.6 0.9 2.1 1.5

0.9 2.9 2.5 1.6 1.2 2.4

3.4 1.3 1.7 2.6 1.1 0.8

1.0 1.5 2.2 3.0 2.0 1.8

2.9 2.5 2.0 3.0 1.5 1.3

2.2 1.0 1.7 3.1 2.7 2.3

0.6 2.0 1.4 3.3 2.2 2.9

1.6 2.3 3.3 2.0 1.6 2.7

1.9 2.1 3.4 1.5 0.8 2.2

1.8 2.4 1.2 3.7 1.3 2.1

2.9 3.0 2.1 1.8 1.1 1.4

2.8 1.8 1.8 2.4 2.3 2.2

2.1 1.2 1.4 1.6 2.4 2.1

2.0 1.1 3.8 1.3 1.3 1.0

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

12. The Get-Well Hospital has completed a quality improvement project on the time to admit a patient, using a histogram. The data are listed below:

10 17 9 17 18 20 16

7 17 19 13 15 14 13

12 13 15 14 13 10 14

11 15 14 11 15 15 16

9 18 15 12 14 13 14

13 14 16 15 16 15 15

14 15 15 16 13 12 16

10 16 14 13 16 14 15

6 15 13 16 15 16 16

12 14 16 15 16 13 15

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.


13. Thickness measurements on pieces of silicon (mm × 0.001).

790 1170 970 940 1050 1020 1070 790

1340 710 1010 770 1020 1260 870 1400

1530 1180 1440 1190 1250 940 1380 1320

1190 750 1280 1140 850 600 1020 1230

1010 1040 1050 1240 1040 840 1120 1320

1160 1100 1190 820 1050 1060 880 1100

1260 1450 930 1040 1260 1210 1190 1350

1240 1490 1490 1310 1100 1080 1200 880

820 980 1620 1260 760 1050 1370 950

1220 1300 1330 1590 1310 830 1270 1290

1000 1100 1160 1180 1010 1410 1070 1250

1040 1290 1010 1440 1240 1150 1360 1120

980 1490 1080 1090 1350 1360 1100 1470

1290 990 790 720 1010 1150 1160 850

1360 1560 980 970 1270 510 960 1390

1070 840 870 1380 1320 1510 1550 1030

1170 920 1290 1120 1050 1250 960 1550

1050 1060 970 1520 940 800 1000 1110

1430 1390 1310 1000 1030 1530 1380 1130

1110 950 1220 1160 970 940 880 1270

750 1010 1070 1210 1150 1230 1380 1620

1760 1400 1400 1200 1190 970 1320 1200

1460 1060 1140 1080 1210 1290 1130 1050

1230 1450 1150 1490 980 1160 1520 1160

1160 1700 1520 1220 1680 900 1030 850

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

14. Fill weights of bottles in grams.

352 351 354 351 351 350

351 344 346 353 348 352

351 349 351 346 353 348

349 345 350 351 351 352

351 350 350 348 349 349

353 352 352 356 351 346

348 353 351 352 350 352

350 350 346 348 347 350

344 351 347 350 346 349

349 342 346 351 347 352

353 352 349 348 347

348 351 352 354 351

348 346 356 348 350

352 350 351 352 347

346 351 352 348 351

353 346 356 348 348

350 352 353 352 352


a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

15. Thickness of Cardboard Sheets

0.52 0.55 0.49 0.51 0.52 0.50 0.52 0.48 0.53 0.51

0.53 0.50 0.51 0.51 0.50 0.45 0.51 0.50 0.52 0.44

0.52 0.51 0.55 0.50 0.52 0.52 0.49 0.55 0.52 0.53

0.42 0.45 0.43 0.42 0.46 0.56 0.51 0.54 0.56 0.52

0.48 0.50 0.47 0.51 0.51 0.49 0.53 0.51 0.52 0.51

0.49 0.52 0.51 0.48 0.50 0.52 0.48 0.47 0.50 0.49

0.54 0.48 0.51 0.48 0.49 0.55 0.46 0.48 0.53 0.50

0.43 0.46 0.53 0.48 0.49 0.51 0.50 0.53 0.50 0.49

0.51 0.52 0.48 0.53 0.54 0.50 0.47 0.49 0.52 0.51

0.53 0.48 0.49 0.51 0.48 0.48 0.52 0.47 0.46 0.47

0.51 0.46 0.51 0.55 0.50 0.52 0.49 0.50 0.48 0.50

0.50 0.54 0.52 0.51 0.52 0.46 0.52 0.48 0.51 0.52

0.46 0.48 0.53 0.51 0.51 0.49 0.55 0.52 0.50 0.49

0.52 0.49 0.52 0.47 0.48 0.52 0.51 0.53 0.47 0.48

0.49 0.51 0.50 0.51 0.52 0.50 0.48 0.52 0.49 0.48

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram. c. Calculate the mean and standard deviation of the distribution.

ADDITIONAL EXERCISE

1. Below are the curing-time test results.

Order curetime defects Order curetime defects1 31.6583 0 51 40.53732 3 2 29.7833 0 52 41.69992 3 3 31.8791 0 53 38.01712 2 4 33.9125 0 54 42.23068 4 5 34.4643 0 55 40.16485 2 6 25.1848 0 56 38.35171 2 7 37.76689 1 57 44.17493 4 8 39.21143 2 58 37.32931 1 9 41.34268 3 59 41.04428 3 10 39.54590 2 60 38.63444 2 11 29.5571 0 61 34.5628 0 12 32.5735 0 62 28.2506 1 13 29.4731 0 63 32.5956 0 14 25.3784 1 64 25.3439 2 15 25.0438 1 65 29.2058 0 16 24.0035 2 66 32.0702 0 17 25.4671 1 67 30.6983 0 18 34.8516 0 68 40.30540 3 19 30.1915 0 69 35.55970 0 20 31.6222 0 70 39.98265 2 21 46.25184 5 71 39.70007 2 22 34.71356 0 72 33.95910 0 23 41.41277 3 73 38.77365 1 24 44.63319 4 74 35.69885 0 25 35.44750 0 75 38.43070 2


26 38.83289 2 76 40.05451 3 27 33.0886 0 77 43.13634 4 28 31.6349 0 78 44.31927 5 29 34.55143 0 79 39.84285 2 30 33.8633 0 80 39.12542 2 31 35.18869 0 81 39.00292 2 32 42.31515 3 82 34.9124 0 33 43.43549 4 83 33.9059 0 34 37.36371 1 84 28.2279 0 35 38.85718 2 85 32.4671 0 36 39.25132 2 86 28.8737 1 37 37.05298 1 87 34.3862 0 38 42.47056 4 88 33.9296 0 39 35.90282 0 89 33.0424 0 40 38.21905 2 90 28.4006 1 41 38.57292 2 91 32.5994 0 42 39.06772 2 92 30.7381 0 43 32.2209 0 93 31.7863 0 44 33.202 0 94 34.0398 0 45 27.0305 1 95 35.7598 0 46 33.6397 0 96 42.37100 3 47 26.6306 2 97 30.206 0 48 42.79176 4 98 34.5604 0 49 38.38454 2 99 27.93 1 50 37.89885 1 100 30.8174 0

Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies. Compare your graphic to the one above.

2. The histogram is demonstrated in the heat flow meter data case study, Heat Flow Meter Calibration and Stability.

Generation
This data set was collected by Bob Zarr of NIST in January 1990 from a heat flow meter calibration and stability analysis. The response variable is a calibration factor.

The motivation for studying this data set is to illustrate a well-behaved process where the underlying assumptions hold and the process is in statistical control.

Resulting Data

The following are the data used for this case study.


9.206343 9.269989 9.264487 9.231304 9.265777

9.299992 9.226585 9.244242 9.240768 9.299047

9.277895 9.258556 9.277542 9.260506 9.244814

9.305795 9.286184 9.310506 9.274355 9.287205

9.275351 9.320067 9.261594 9.292376 9.300566

9.288729 9.327973 9.259791 9.271170 9.256621

9.287239 9.262963 9.253089 9.267018 9.271318

9.260973 9.248181 9.245735 9.308838 9.275154

9.303111 9.238644 9.284058 9.264153 9.281834

9.275674 9.225073 9.251122 9.278822 9.253158

9.272561 9.220878 9.275385 9.255244 9.269024

9.288454 9.271318 9.254619 9.229221 9.282077

9.255672 9.252072 9.279526 9.253158 9.277507

9.252141 9.281186 9.275065 9.256292 9.284910

9.297670 9.270624 9.261952 9.262602 9.239840

9.266534 9.294771 9.275351 9.219793 9.268344

9.256689 9.301821 9.252433 9.258452 9.247778

9.277542 9.278849 9.230263 9.267987 9.225039

9.248205 9.236680 9.255150 9.267987 9.230750

9.252107 9.233988 9.268780 9.248903 9.270024

9.276345 9.244687 9.290389 9.235153 9.265095

9.278694 9.221601 9.274161 9.242933 9.284308

9.267144 9.207325 9.255707 9.253453 9.280697

9.246132 9.258776 9.261663 9.262671 9.263032

9.238479 9.275708 9.250455 9.242536 9.291851

9.269058 9.268955 9.261952 9.260803 9.252072

9.248239 9.257269 9.264041 9.259825 9.244031

9.257439 9.264979 9.264509 9.253123 9.283269

9.268481 9.295500 9.242114 9.240803 9.196848

9.288454 9.292883 9.239674 9.238712 9.231372

9.258452 9.264188 9.221553 9.263676 9.232963

9.286130 9.280731 9.241935 9.243002 9.234956

9.251479 9.267336 9.215265 9.246826 9.216746

9.257405 9.300566 9.285930 9.252107 9.274107

9.268343 9.253089 9.271559 9.261663 9.273776

9.291302 9.261376 9.266046 9.247311

9.219460 9.238409 9.285299 9.306055

9.270386 9.225073 9.268989 9.237646

9.218808 9.235526 9.267987 9.248937

9.241185 9.239510 9.246166 9.256689

a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and frequencies.

b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
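The sketch below is one possible way to work exercises a-c (and the cure-time exercise above) in Python; it is not part of the original course material. It assumes numpy and matplotlib are available and that the readings have been pasted into the "values" array.

import numpy as np
import matplotlib.pyplot as plt

# Paste the full set of 195 calibration-factor readings here; only the first
# three are shown to keep the sketch short.
values = np.array([9.206343, 9.269989, 9.264487])

k = int(np.ceil(np.sqrt(len(values))))        # rule-of-thumb number of cells
freq, edges = np.histogram(values, bins=k)    # frequencies and cell boundaries
midpoints = (edges[:-1] + edges[1:]) / 2      # cell midpoints

for lo, hi, mid, f in zip(edges[:-1], edges[1:], midpoints, freq):
    print(f"{lo:.4f} - {hi:.4f}   midpoint {mid:.4f}   frequency {f}")

print("mean =", values.mean())
print("standard deviation =", values.std(ddof=1))   # sample standard deviation

plt.hist(values, bins=k, edgecolor="black")          # frequency histogram
plt.xlabel("Calibration factor")
plt.ylabel("Frequency")
plt.show()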


SCATTER PLOT

Also called: scatter diagram, X–Y graph

Scatter plot diagrams are used to evaluate the correlation or cause-effect relationship (if any) between two variables (e.g., speed and gas consumption in a vehicle). The scatter diagram graphs pairs of numerical data, with one variable on each axis, to look for a relationship between them. If the variables are correlated, the points will fall along a line or curve. The better the correlation, the tighter the points will hug the line.

The simplest form of scatter diagram consists of plotting bivariate data to depict the relationship between two variables. When analyzing processes, the relationship between a controllable variable and a desired quality characteristic is frequently of importance. Knowing this relationship may help us decide how to set a controllable variable to achieve a desired level for the output characteristic.

When you think there's a cause-effect link between two indicators (e.g., calories consumed and weight gain) then you can use the scatter plot diagram to prove or disprove it. If the points are tightly clustered along the trend line, then there's probably a strong correlation. If it looks more like a shotgun blast, there is no correlation.

If the R² coefficient of determination (the square of the correlation coefficient) is greater than 0.8, then more than 80% of the variability in the data is accounted for by the fitted equation. Most statistics books treat this as a strong correlation.

An example scatter plot diagram, annotated with the correlation coefficient and the coefficient of determination, is shown in the figure.

How to Create a Scatter Plot Diagram

1. Collect 20 or more paired data samples.
2. Label the "effect" variable on the left-hand (Y) axis.
3. Label the suspected "cause" variable on the bottom (X) axis.
4. Plot all of the paired points (the QI Macros will do this for you).
5. Analyze the graph for relationships among the variables.
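As a minimal illustration of steps 1-5 (not from the original text), the Python sketch below plots hypothetical paired data and reports the correlation coefficient and R²; numpy and matplotlib are assumed.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired samples: suspected cause (x) and effect (y)
x = np.array([1.2, 2.0, 2.8, 3.5, 4.1, 4.9, 5.6, 6.2])
y = np.array([0.031, 0.035, 0.038, 0.044, 0.047, 0.052, 0.060, 0.064])

r = np.corrcoef(x, y)[0, 1]     # correlation coefficient
r2 = r ** 2                     # coefficient of determination

plt.scatter(x, y)
plt.xlabel("Suspected cause (X)")
plt.ylabel("Effect (Y)")
plt.title(f"r = {r:.3f}, R-squared = {r2:.3f}")
plt.show()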

When to Use

When you have paired numerical data.
When your dependent variable may have multiple values for each value of your independent variable.
When trying to determine whether the two variables are related, such as when trying to identify potential root causes of problems.


After brainstorming causes and effects using a fishbone diagram, to determine objectively whether a particular cause and effect are related.

When determining whether two effects that appear to be related both occur with the same cause.

When testing for autocorrelation before constructing a control chart.

Procedure

1. Collect pairs of data where a relationship is suspected.
2. Draw a graph with the independent variable on the horizontal axis and the dependent variable on the vertical axis. For each pair of data, put a dot or a symbol where the x-axis value intersects the y-axis value. (If two dots fall together, put them side by side, touching, so that you can see both.)
3. Look at the pattern of points to see if a relationship is obvious. If the data clearly form a line or a curve, you may stop. The variables are correlated. You may wish to use regression or correlation analysis now. Otherwise, complete steps 4 through 7.
4. Divide the points on the graph into four quadrants. If there are X points on the graph:
o Count X/2 points from top to bottom and draw a horizontal line.
o Count X/2 points from left to right and draw a vertical line.
o If the number of points is odd, draw the line through the middle point.
5. Count the points in each quadrant. Do not count points on a line.
6. Add the diagonally opposite quadrants. Find the smaller sum and the total of points in all quadrants:
o A = points in upper left + points in lower right
o B = points in upper right + points in lower left
o Q = the smaller of A and B
o N = A + B
7. Look up the limit for N on the trend test table.
o If Q is less than the limit, the two variables are related.
o If Q is greater than or equal to the limit, the pattern could have occurred from random chance.
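The quadrant-count portion of the procedure (steps 4-6) can be sketched in Python as below; this is an illustration only, and the limit for N must still be looked up in the trend test table.

import numpy as np

def quadrant_counts(x, y):
    # Median-based quadrant count from steps 4-6 above.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    x_med, y_med = np.median(x), np.median(y)
    # Points falling exactly on a median line are not counted.
    upper_left  = np.sum((x < x_med) & (y > y_med))
    upper_right = np.sum((x > x_med) & (y > y_med))
    lower_left  = np.sum((x < x_med) & (y < y_med))
    lower_right = np.sum((x > x_med) & (y < y_med))
    A = upper_left + lower_right
    B = upper_right + lower_left
    return A, B, min(A, B), A + B        # A, B, Q, N

# Usage with your paired data, e.g. iron (ppm) and purity (%):
# A, B, Q, N = quadrant_counts(iron_ppm, purity)
# Compare Q against the limit for N from the trend test table.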

Example: 1

The ZZ-400 manufacturing team suspects a relationship between product purity (percent purity) and the amount of iron (measured in parts per million or ppm). Purity and iron are plotted against each other as a scatter diagram, as shown in the figure below. There are 24 data points. Median lines are drawn so that 12 points fall on each side for both percent purity and ppm iron.

To test for a relationship, they calculate:


A = points in upper left + points in lower right = 9 + 9 = 18
B = points in upper right + points in lower left = 3 + 3 = 6
Q = the smaller of A and B = the smaller of 18 and 6 = 6
N = A + B = 18 + 6 = 24

Then they look up the limit for N on the trend test table. For N = 24, the limit is 6. Q is equal to the limit. Therefore, the pattern could have occurred from random chance, and no relationship is demonstrated.

Scatter Diagram Example

Considerations

Here are some examples of situations in which you might use a scatter diagram:

Variable A is the temperature of a reaction after 15 minutes. Variable B measures the color of the product. You suspect higher temperature makes the product darker. Plot temperature and color on a scatter diagram.

Variable A is the number of employees trained on new software, and variable B is the number of calls to the computer help line. You suspect that more training reduces the number of calls. Plot number of people trained versus number of calls.

To test for autocorrelation of a measurement being monitored on a control chart, plot this pair of variables: Variable A is the measurement at a given time. Variable B is the same measurement, but at the previous time. If the scatter diagram shows correlation, do another diagram where variable B is the measurement two times previously. Keep increasing the separation between the two times until the scatter diagram shows no correlation.
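The autocorrelation check described above can be sketched as a lag plot in Python; the measurement series below is simulated purely for illustration.

import numpy as np
import matplotlib.pyplot as plt

measurements = np.random.default_rng(1).normal(10.0, 0.2, size=100)  # stand-in for your monitored data

lag = 1                                  # increase the lag until the correlation disappears
later   = measurements[lag:]             # measurement at a given time
earlier = measurements[:-lag]            # the same measurement, "lag" samples earlier
print("lag", lag, "correlation:", np.corrcoef(later, earlier)[0, 1])

plt.scatter(earlier, later)
plt.xlabel("Measurement at previous time")
plt.ylabel("Measurement at current time")
plt.show()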

Even if the scatter diagram shows a relationship, do not assume that one variable caused the other. Both may be influenced by a third variable.

When the data are plotted, the more the diagram resembles a straight line, the stronger the relationship.

If a line is not clear, statistics (N and Q) determine whether there is reasonable certainty that a relationship exists. If the statistics say that no relationship exists, the pattern could have occurred by random chance.

If the scatter diagram shows no relationship between the variables, consider whether the data might be stratified.

If the diagram shows no relationship, consider whether the independent (x-axis) variable has been varied widely. Sometimes a relationship is not apparent because the data don’t cover a wide enough range.

Think creatively about how to use scatter diagrams to discover a root cause. Drawing a scatter diagram is the first step in looking for a relationship between variables.


Example: 2 Determine the relationship (graphically) between the depth of cut in a milling operation and the amount of tool wear. The 40 observations listed below were taken from the process such that the depth of cut was varied over a range of values and the corresponding amount of tool wear was recorded.

No   X    Y        No   X    Y        No   X    Y        No   X    Y
 1  2.1  0.035     11  2.6  0.039     21  5.6  0.073     31  3.0  0.032
 2  4.2  0.041     12  5.2  0.056     22  4.7  0.064     32  3.6  0.038
 3  1.5  0.031     13  4.1  0.048     23  1.9  0.030     33  1.9  0.032
 4  1.8  0.027     14  3.0  0.037     24  2.4  0.029     34  5.1  0.052
 5  2.3  0.033     15  2.2  0.028     25  3.2  0.039     35  4.7  0.050
 6  3.8  0.045     16  4.6  0.057     26  3.4  0.038     36  5.2  0.058
 7  2.6  0.038     17  4.8  0.060     27  2.8  0.040     37  4.1  0.048
 8  4.3  0.047     18  5.3  0.068     28  2.2  0.031     38  4.3  0.049
 9  3.4  0.040     19  3.9  0.048     29  2.0  0.033     39  3.8  0.042
10  4.5  0.058     20  3.5  0.036     30  2.9  0.035     40  3.6  0.045

Note: X = depth of cut (in mm), Y = tool wear (in mm)

Figure: Scatter Plot of Tool Wear Vs. Depth of Cut (tool wear, in mm, on the vertical axis; depth of cut, in mm, on the horizontal axis)

POSITIVE CORRELATION

Example: 3 Determine the relationship (graphically) between the temperature and pressure listed below.

No  Tmp  Pre     No  Tmp  Pre     No  Tmp  Pre
 1  180  80      11  190  60      21  210  55
 2  190  60      12  200  70      22  230  50
 3  160  80      13  230  50      23  200  40
 4  200  40      14  240  45      24  240  40
 5  210  45      15  240  30      25  250  35
 6  190  50      16  220  40      26  230  45
 7  220  50      17  250  30      27  220  40
 8  240  35      18  180  70      28  180  70
 9  220  50      19  190  75      29  210  60
10  210  40      20  200  65      30  220  55


Figure: Temperature Vs. Pressure scatter plot (pressure on the vertical axis; temperature on the horizontal axis)

NEGATIVE CORRELATION / POSSIBLE NEGATIVE CORRELATION

Example: 4 Determine the relationship (graphically) between the speed and length listed below.

No Speed Length No Speed Length No Speed Length No Speed Length No Speed Length

1 5.1 46 11 5.0 26 21 5.5 30 31 3.7 20 41 5.0 31

2 4.7 30 12 5.0 41 22 4.4 14 32 5.1 35 42 3.9 30

3 4.4 39 13 4.2 29 23 4.2 30 33 6.0 52 43 4.6 34

4 2.8 27 14 3.0 10 24 3.6 16 34 4.1 21 44 3.5 34

5 4.6 28 15 3.3 20 25 3.3 20 35 4.6 24 45 2.5 20

6 3.8 25 16 3.7 24 26 5.0 40 36 5.5 29 46 3.0 25

7 4.9 35 17 5.2 34 27 2.5 13 37 4.5 15 47 2.5 23

8 3.3 15 18 5.1 36 28 3.9 25 38 5.0 30 48 4.6 28

9 4.0 38 19 3.3 23 29 4.0 20 39 2.2 10 49 5.6 20

10 5.0 36 20 3.5 11 30 4.5 22 40 3.5 25 50 3.3 26

Figure: Speed Vs. Length scatter plot (length on the vertical axis; speed on the horizontal axis)

POSSIBLE POSITIVE CORRELATION


ISHIKAWA FISHBONE DIAGRAM

or

CAUSE & EFFECT DIAGRAM

Definition: A graphic tool used to explore and display opinion about sources of variation in a process. (Also called a Cause-and-Effect or Fishbone Diagram). The fishbone diagram identifies many possible causes for an effect or problem. It can be used to structure a brainstorming session. It immediately sorts ideas into useful categories.

A cause & effect diagram is a picture composed of lines and symbols designed to represent a meaningful relationship between an effect and its causes. It was developed by Dr. Kaoru Ishikawa in 1943 and is sometimes referred to as an Ishikawa diagram.

The cause-and-effect diagram has nearly unlimited application in research, manufacturing, marketing, office operations, and so forth. Use the Ishikawa fishbone diagram or cause and effect diagram to identify the special root causes of delay, waste, rework or cost. One of its strongest assets is the participation and contribution of everyone involved in the brainstorming process.

When to Use: When identifying possible causes for a problem. Especially when a team’s thinking tends to fall into ruts.

There are three main applications of cause-and-effect diagrams:

Cause enumeration
Cause enumeration is one of the most widely used graphical techniques for quality control and improvement. It usually develops through a brainstorming session in which all possible causes (however remote they may be) are listed to show their influence on the problem (or effect) in question.

Dispersion analysis
In dispersion analysis, each major cause is thoroughly analyzed by investigating the sub-causes and their impact on the quality characteristic (or effect) in question. This process is repeated for each major cause in a prioritized order. In dispersion analysis, causes that don't fit the selected categories are not listed. Hence, it is possible that the root cause will not be identified in dispersion analysis.

Process Analysis

When cause-and-effect diagrams are constructed for process analysis, the emphasis is on listing the causes in the sequence in which the operations are actually conducted. This process is similar to creating a flow diagram, except that a cause-and-effect diagram lists in detail the causes that influence the quality characteristic of interest at each step of the production process.

Purpose: To arrive at a few key sources that contribute most significantly to the problem being examined. These sources are then targeted for improvement. The diagram also illustrates the relationships among the wide variety of possible contributors to the effect. Use the Ishikawa fishbone diagram or cause and effect diagram to identify the special root causes of delay, waste, rework or cost.

The figure below shows a simple Ishikawa diagram. Note that this tool is referred to by several different names: Ishikawa diagram, Cause-and-Effect diagram, Fishbone diagram, and Root Cause Analysis. The first name is after the inventor of the tool, Kaoru Ishikawa (1969) who first used the technique in the 1960s.


The basic concept in the Cause-and-Effect diagram is that the name of a basic problem of interest is entered at the right of the diagram at the end of the main "bone". The main possible causes of the problem (the effect) are drawn as bones off of the main backbone. The "Four-M" categories are typically used as a starting point: "Materials", "Machines", "Manpower", and "Methods". Different names can be chosen to suit the problem at hand, or these general categories can be revised. The key is to have three to six main categories that encompass all possible influences. Brainstorming is typically done to add possible causes to the main "bones" and more specific causes to the "bones" on the main "bones". This subdivision into ever increasing specificity continues as long as the problem areas can be further subdivided. The practical maximum depth of this tree is usually about four or five levels. When the fishbone is complete, one has a rather complete picture of all the possibilities about what could be the root cause for the designated problem.

The Cause-and-Effect diagram can be used by individuals or teams; probably most effectively by a group. A typical utilization is the drawing of a diagram on a blackboard by a team leader who first presents the main problem and asks for assistance from the group to determine the main causes which are subsequently drawn on the board as the main bones of the diagram. The team assists by making suggestions and, eventually, the entire cause and effect diagram is filled out. Once the entire fishbone is complete, team discussion takes place to decide what are the most likely root causes of the problem. These causes are circled to indicate items that should be acted upon, and the use of the tool is complete.

The Ishikawa diagram, like most quality tools, is a visualization and knowledge organization tool. Simply collecting the ideas of a group in a systematic way facilitates the understanding and ultimate diagnosis of the problem. Several computer tools have been created for assisting in creating Ishikawa diagrams. A tool created by the Japanese Union of Scientists and Engineers (JUSE) provides a rather rigid tool with a limited number of bones. Other similar tools can be created using various commercial tools.

Only one tool has been created that adds computer analysis to the fishbone. Bourne et al. (1991) reported using Dempster-Shafer theory (Shafer and Logan, 1987) to systematically organize the beliefs about the various causes that contribute to the main problem. Based on the idea that the main problem has a total belief of one, each remaining bone has a belief assigned to it based on several factors; these include the history of problems of a given bone, events and their causal relationship to the bone, and the belief of the user of the tool about the likelihood that any particular bone is the cause of the problem.

The diagrams are useful in:
Analyzing actual conditions for the purpose of product or service quality improvement, more efficient use of resources, and reduced costs.
Elimination of conditions causing nonconforming product or service and customer complaints.
Standardization of existing and proposed operations.
Education and training of personnel in decision-making and corrective-action activities.


Procedure Materials needed: flipchart or whiteboard, marking pens.

1. Agree on a problem statement (effect). Write it at the center right of the flipchart or whiteboard. Draw a box around it and draw a horizontal arrow running to it.

2. Brainstorm the major categories of causes of the problem. If this is difficult use generic headings:

o Methods
o Machines (equipment)
o People (manpower)
o Materials
o Measurement
o Environment

3. Write the categories of causes as branches from the main arrow.
4. Brainstorm all the possible causes of the problem. Ask: "Why does this happen?" As each idea is given, the facilitator writes it as a branch from the appropriate category. Causes can be written in several places if they relate to several categories.
5. Again ask "Why does this happen?" about each cause. Write sub-causes branching off the causes. Continue to ask "Why?" and generate deeper levels of causes. Layers of branches indicate causal relationships.

6. When the group runs out of ideas, focus attention to places on the chart where ideas are few.
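The categories, causes, and sub-causes gathered in steps 2-5 form a simple tree. As an illustration only (the effect and causes below are invented), such a tree can be captured and printed with a small Python sketch:

# Illustrative only: the effect, categories, and causes are invented.
fishbone = {
    "effect": "Late deliveries",
    "categories": {
        "Methods":   {"No standard routing": {"Routes planned ad hoc": {}}},
        "Machines":  {"Truck breakdowns": {"Deferred maintenance": {}}},
        "People":    {"New drivers": {"Training backlog": {}}},
        "Materials": {"Packaging shortages": {}},
    },
}

def print_bones(causes, depth=1):
    # Walk the nested causes, indenting one level per "Why?".
    for cause, sub_causes in causes.items():
        print("  " * depth + "- " + cause)
        print_bones(sub_causes, depth + 1)

print(fishbone["effect"])
print_bones(fishbone["categories"])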

How to construct:

1. Place the main problem under investigation in a box on the right.
2. Have the team generate and clarify all the potential sources of variation.
3. Use an affinity diagram to sort the process variables into naturally related groups. The labels of these groups are the names for the major bones on the Ishikawa diagram.
4. Place the process variables on the appropriate bones of the Ishikawa diagram.
5. Combine each bone in turn, ensuring that the process variables are specific, measurable, and controllable. If they are not, branch or "explode" the process variables until the ends of the branches are specific, measurable, and controllable.

To complete an Ishikawa fishbone diagram or cause and effect diagram:
1. Put the problem statement in the head of the fish and the major causes at the end of the major bones. Major causes include: processes, machines, materials, measurement, people, environment; steps of a process (step 1, step 2, etc.); whatever makes sense.
2. Begin with the most likely main cause.
3. For each cause, ask "Why?" up to five times.
4. Circle one to five ROOT causes (end of the "why" chain).

5. Verify the root causes with data (Pareto or Scatter Diagram)

Tip:

Take care to identify causes rather than symptoms.
Post diagrams to stimulate thinking and get input from other staff.
Self-adhesive notes can be used to construct Ishikawa diagrams. Sources of variation can be rearranged to reflect appropriate categories with minimal rework.
Ensure that the ideas placed on the Ishikawa diagram are process variables, not special causes, other problems, tampering, etc. Review the quick fixes and rephrase them, if possible, so that they are process variables.

Common Causes of Variation

a. Process Cause-Effect Diagram

Each step of the process is a major cause.


b. Generic Cause-Effect Diagram
People, process, machines, materials, and measurement are the major causes.

c. Custom Cause-Effect Diagram
Use the generic categories but adapt them to your process: service representative, service order, software systems, reference manuals, and access.

Example

This fishbone diagram was drawn by a manufacturing team to try to understand the source of periodic iron contamination. The team used the six generic headings to prompt ideas. Layers of branches show thorough thinking about the causes of the problem.

For example, under the heading “Machines,” the idea “materials of construction” shows four kinds of equipment and then several specific machine numbers. Note that some ideas appear in two different places. “Calibration” shows up under “Methods” as a factor in the analytical procedure, and also under “Measurement” as a cause of lab error. “Iron tools” can be considered a “Methods” problem when taking samples or a “Manpower” problem with maintenance personnel.


RUN CHARTS/TIME PLOT/TREND CHART

OVERVIEW

PURPOSE
In-depth view into Run Charts--a quality improvement technique; how Run charts are used to monitor processes; and how using Run charts can lead to improved process quality. The purposes of using control charts are:

To help prevent the process from going out of control.
Control charts help detect the assignable causes of variation in time so that appropriate actions can be taken to bring the process back in control.

To keep from making adjustments when they are not needed.
Most production processes allow operators a certain level of leeway to make adjustments on the machines that they are using when it is necessary. Yet over-adjusting machines can have negative impacts on the output. Control charts can indicate when adjustments are necessary and when they are not.

To determine the natural range (control limits) of a process and to compare it to its specified limits.
If the range of the control limits is wider than that of the specified limits, the production process will need to be adjusted.

To inform about the process capability and stability.
Process capability refers to the ability to consistently deliver products that are within the specified limits, and stability refers to the quality auditor's ability to predict the process trends based on past experience. A long-term analysis of the control charts can help monitor a machine's long-term capabilities; machine wear-out will reflect on the production output.

To fulfill the need for constant process monitoring.
Samples need to be taken on a regular basis and tested to make sure that the quality of the products sent to the customers meets their expectations.

USAGE Run charts are used to analyze processes according to time or order. Run charts are useful in discovering patterns that occur over time.

KEY TERMS

Trends: Trends are patterns or shifts according to time. An upward trend, for instance, would contain a section of data points that increased as time passed.

Population: A population is the entire data set of the process. If a process produces one thousand parts a day, the population would be the one thousand items.

Sample: A sample is a subgroup or small portion of the population that is examined when the entire population can not be evaluated. For instance, if the process does produce one thousand items a day, the sample size could be perhaps three hundred.

HISTORY Run charts originated from control charts, which were initially designed by Walter Shewhart. Walter Shewhart was a statistician at Bell Telephone Laboratories in New York. Shewhart developed a system for bringing processes into statistical control by developing ideas which would allow for a system to be controlled using control charts. Run charts evolved from the development of these control charts, but run charts focus more on time patterns while a control chart focuses more on acceptable limits of the process. Shewhart's discoveries are the basis of what is known as SQC or Statistical Quality Control.


INSTRUCTIONS FOR CREATING A CHART

Step 1 : Gathering Data To begin any run chart, some type of process or operation must be available to take measurements for analysis. Measurements must be taken over a period of time. The data must be collected in a chronological or sequential form. You may start at any point and end at any point. For best results, at least 25 or more samples must be taken in order to get an accurate run chart.

Step 2 : Organizing Data Once the data has been placed in chronological or sequential form, it must be divided into two sets of values x and y. The values for x represent time and the values for y represent the measurements taken from the manufacturing process or operation.

Step 3 : Charting Data Plot the y values versus the x values by hand or by computer, using an appropriate scale that will make the points on the graph visible. Next, draw vertical lines for the x values to separate time intervals such as weeks. Draw horizontal lines to show where trends in the process or operation occur or will occur.

Step 4 : Interpreting Data After drawing the horizontal and vertical lines to segment data, interpret the data and draw any conclusions that will be beneficial to the process or operation. Some possible outcomes are:

Trends in the chart
Cyclical patterns in the data
Observations from each time interval are consistent

EXAMPLE : 1

Problem Scenario You have just moved into a new area that you are not familiar with. Your desire is to arrive at work on time, but you have noticed over your first couple of weeks on the job that it doesn't take the same amount of time each day of the week. You decide to monitor the amount of time it takes to get to work over the next four weeks and construct a run chart.

Step 1: Gathering Data Collect measurements each day over the next four weeks. Organize and record the data in chronological or sequential form.

M T W TH F

WEEK 1 33 28 26.5 28 26

WEEK 2 35 30.5 28 26 25.5

WEEK 3 34.5 29 28 26 25

WEEK 4 34 29.5 27 27 25.5

Step 2: Organizing Data Determine what the values for the x (time, day of week) and y (data, minutes to work) axes will be.

Step 3: Charting Data Plot the y values versus the x values by hand or by computer using the appropriate scale. Draw horizontal or vertical lines on the graph where trends or inconsistencies occur.


Step 4: Interpreting Data Interpret results and draw any conclusions that are important. An overall decreasing trend occurs each week with Mondays taking the most amount of time and Fridays generally taking the least amount of time. Therefore you accordingly allow yourself more time on Mondays to arrive to work on time.
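As an informal illustration (not part of the original example), the four weeks of commute times above could be plotted as a run chart in Python; matplotlib is assumed.

import matplotlib.pyplot as plt

# Minutes to work, Monday-Friday, weeks 1-4 (data from the table above)
times = [33, 28, 26.5, 28, 26,
         35, 30.5, 28, 26, 25.5,
         34.5, 29, 28, 26, 25,
         34, 29.5, 27, 27, 25.5]

plt.plot(range(1, len(times) + 1), times, marker="o")
for boundary in (5.5, 10.5, 15.5):            # vertical lines separating the weeks
    plt.axvline(boundary, linestyle="--")
plt.xlabel("Working day (chronological order)")
plt.ylabel("Minutes to work")
plt.title("Run chart of commute times")
plt.show()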

CONTROL CHART

What is it?

A Control Chart is a tool you can use to monitor a process. It graphically depicts the average value and the upper and lower control limits (the highest and lowest values) of a process.

A graphical tool for monitoring changes that occur within a process, by distinguishing variation that is inherent in the process (common cause) from variation that yields a change to the process (special cause). This change may be a single point or a series of points in time - each is a signal that something is different from what was previously observed and measured.

A control chart (also called process chart or quality control chart) is a graph that shows whether a sample of data falls within the common or normal range of variation. A control chart has upper and lower control limits that separate common from assignable causes of variation. The common range of variation is defined by the use of control limits. We say that a process is out of control when a plot of data reveals that one or more samples fall outside the control limits.

The control chart, also known as the 'Shewhart chart' or 'process-behaviour chart' is a statistical tool intended to assess the nature of variation in a process and to facilitate forecasting and management. A control chart is a more specific kind of a run chart.

The control chart was invented by Walter A. Shewhart while working for Bell Labs in the 1920s. The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a business need to reduce the frequency of failures and repairs. By 1920 they had already realised the importance of reducing variation in a manufacturing process. Moreover, they had realised that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of Common- and special-causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Dr. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control."

[1] Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.


Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood data from physical processes never produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.

A control chart is a run chart of a sequence of quantitative data with five horizontal lines drawn on the chart:

A centre line, drawn at the process mean;
An upper warning limit, drawn two standard deviations above the centre line;
An upper control-limit (also called an upper natural process-limit), drawn three standard deviations above the centre line;
A lower warning limit, drawn two standard deviations below the centre line;
A lower control-limit (also called a lower natural process-limit), drawn three standard deviations below the centre line.

Common cause variation plots as an irregular pattern, mostly within the control limits. Any observations outside the limits, or patterns within, suggest (signal) a special-cause (see Rules below). The run chart provides a context in which to interpret signals and can be beneficially annotated with events in the business.

Why use it?
All processes have some form of variation. A Control Chart helps you distinguish between normal and unusual variation in a process. If you want to reduce the amount of variation in a process, you need to compare the results of the process with a standard.

Variation can exist for two reasons:

1. Common causes are flaws inherent in the design of the process.
2. Special causes are variations from standards caused by employees or by unusual circumstances or events.

Most variations in processes are caused by flaws in the system or the process, not by the employees. Once you realize this, you can stop blaming the employees and start changing the systems and processes that cause the employees to make mistakes. (It is important to remember, however, that some variations are not "mistakes" introduced by employees, but, rather, they are innovations. Some variations are deliberately introduced to processes by employees specifically because these variations are found to be more practical.)

Different types of control charts can be used, depending upon the type of data. The two broadest groupings are for variable data and attribute data.

Variable data are measured on a continuous scale. For example: time, weight, distance or temperature can be measured in fractions or decimals. The possibility of measuring to greater precision defines variable data. Attribute data are counted and cannot have fractions or decimals. Attribute data arise when you are determining only the presence or absence of something: success or failure, accept or reject, correct or not correct. For example, a report can have four errors or five errors, but it cannot have four and a half errors.

Variables charts
o X-bar and R chart (also called averages and range chart)
o X-bar and s chart
o chart of individuals (also called X chart, X-R chart, IX-MR chart, XmR chart, moving range chart)
o moving average–moving range chart (also called MA–MR chart)
o target charts (also called difference charts, deviation charts and nominal charts)
o CUSUM (also called cumulative sum chart)
o EWMA (also called exponentially weighted moving average chart)
o multivariate chart (also called Hotelling T2)

Attributes charts
o p chart (also called proportion chart)
o np chart
o c chart (also called count chart)
o u chart

Charts for either kind of data
o short run charts (also called stabilized charts or Z charts)
o group charts (also called multiple characteristic charts)

When to use it? First, you need to define the standards of how things should be. Then, you need to monitor (collect data) about processes in your organization. Then, you create a control graph using the monitoring data.

When controlling ongoing processes by finding and correcting problems as they occur.
When predicting the expected range of outcomes from a process.
When determining whether a process is stable (in statistical control).
When analyzing patterns of process variation from special causes (non-routine events) or common causes (built into the process).
When determining whether your quality improvement project should aim to prevent specific problems or to make fundamental changes to the process.

The control chart is a graph used to study how a process changes over time. Data are plotted in time order. A control chart always has a central line for the average, an upper line for the upper control limit and a lower line for the lower control limit. These lines are determined from historical data. By comparing current data to these lines, you can draw conclusions about whether the process variation is consistent (in control) or is unpredictable (out of control, affected by special causes of variation).

Control charts for variable data are used in pairs. The top chart monitors the average, or the centering of the distribution of data from the process. The bottom chart monitors the range, or the width of the distribution. If your data were shots in target practice, the average is where the shots are clustering, and the range is how tightly they are clustered. Control charts for attribute data are used singly.

How to use it:

Basic Procedure

1. Choose the appropriate control chart for your data.
2. Determine the appropriate time period for collecting and plotting data.
3. Collect data, construct your chart and analyze the data.
4. Look for "out-of-control signals" on the control chart. When one is identified, mark it on the chart and investigate the cause. Document how you investigated, what you learned, the cause and how it was corrected.


For Construction

1. Select the process to be charted and decide on the type of control chart to use.
o Use a Percent Nonconforming Chart (more information available from Health Tactics P Chart) if you have data measured using two outcomes (for example, the billing can be correct or incorrect).
o Use an Average and Range Control Chart (more information available from Health Tactics X-R Chart) if you have data measured using a continuous scale (for example, waiting time in the health center).

2. Determine your sampling method and plan:
o Choose the sample size (how many samples will you obtain?).
o Choose the frequency of sampling, depending on the process to be evaluated (months, days, years?).
o Make sure you get samples at random (don't always get data from the same person, on the same day of the week, etc.).

3. Start data collection:
o Gather the sampled data.
o Record data on the appropriate control graph.

4. Calculate the appropriate statistics (the control limits) depending on the type of graph.

5. Observation: The control graph is divided into zones: ______________________________ Upper Control Limit (UCL)

______________________________ Standard (average)

______________________________ Lower Control Limit (LCL)

6. Interpret the graph:
If the data fluctuates within the limits, it is the result of common causes within the process (flaws inherent in the process) and can only be affected if the system is improved or changed.

If the data falls outside of the limits, it is the result of special causes (in human service organizations, special causes can include bad instruction, lack of training, ineffective processes, or inadequate support systems).

These special causes must be eliminated before the control chart can be used as a monitoring tool. In a health setting, for example, staff may need better instruction or training, or processes may need to be improved, before the process is "under control." Once the process is "under control," samples can be taken at regular intervals to assure that the process does not fundamentally change.

A process is said to be "out of control" if one or more points falls outside the control limits.

7. Continue to plot data as they are generated. As each new data point is plotted, check for new out-of-control signals.

8. When you start a new control chart, the process may be out of control. If so, the control limits calculated from the first 20 points are conditional limits. When you have at least 20 sequential points from a period when the process is operating in control, recalculate control limits.

HOW TO BUILD A CONTROL CHART

The control charts we are addressing are created for a production process in progress. Samples are taken from the production lines at given time intervals and tested to determine whether they are in conformance with the specifications, and their level of conformance is plotted on the charts and monitored.

Let's consider Y, a sample statistic that measures a Critical-To-Quality characteristic of a product (length, color, or thickness, etc.), with a mean μ and a standard deviation σ. The Upper Control Limit (UCL), the Center Line (CL) and the Lower Control Limit (LCL) for the control chart are given as follows:

UCL = μ + kσ
CL = μ
LCL = μ - kσ


where k is the distance between the center line and the control limits, expressed in standard deviations.

EXAMPLE: 2 Consider the length as being the critical characteristic of manufactured bolts. The mean length of the bolts is 17 inches with a known standard deviation of 0.01. A sample of 5 bolts is taken every half an hour for testing and the mean of the sample is computed and plotted on the chart. That control chart will be called an X-bar control chart because it plots the means of the samples.

Based on the Central Limit Theorem, we can determine the standard deviation and the mean of the sample means. The mean will still be the same as the population's mean, 17, and the standard deviation of the sample means is σ/√n = 0.01/√5 ≈ 0.00447. For three-sigma control limits, we will have:

UCL = 17 + 3(0.00447) ≈ 17.0134
CL = 17
LCL = 17 - 3(0.00447) ≈ 16.9866

Control limits on a control chart should be readjusted every time a significant shift in the process occurs.

A typical control chart is made up of at least four lines: a vertical axis that measures the levels of the samples' means, the two outermost horizontal lines that represent the UCL and the LCL, and the Center Line that represents the mean. If all the points plot in between the UCL and the LCL in a random manner, the process is considered to be in control.
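A minimal Python sketch of the bolt example above (not from the original text): it computes the three-sigma limits for the sample means and flags any hypothetical sample mean that falls outside them.

import numpy as np

mu, sigma, n, k = 17.0, 0.01, 5, 3           # known process mean and sigma, sample size, k-sigma limits
sigma_xbar = sigma / np.sqrt(n)              # standard deviation of the sample means

UCL = mu + k * sigma_xbar
CL  = mu
LCL = mu - k * sigma_xbar
print(f"LCL={LCL:.4f}  CL={CL:.4f}  UCL={UCL:.4f}")

# Hypothetical sample means taken every half hour
sample_means = [17.002, 16.998, 17.005, 17.015, 16.991]
for i, xbar in enumerate(sample_means, start=1):
    status = "in control" if LCL <= xbar <= UCL else "OUT OF CONTROL"
    print(f"sample {i}: mean = {xbar:.3f}  ({status})")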

What is meant by an in-control process is not a total absence of variation. Instead, when variations are present, they exhibit a random pattern, they are not outside the control limits and, based on past experience, they can be predicted and are strictly due to common causes. The control charts are an effective tool for detecting the special causes of variation.

The following chart depicts a process in control and within the specified limits. The Normal curve on the left side shows the specified (desired) limits of the production process while the right chart is the control chart. The specification limits determine whether the products meet the customers' expectations while the control limits determine whether the process is under statistical control. These two charts are completely separate entities. There is no statistical relationship between the specification limits and the control limits.


If some points are outside the control limits, this will indicate that the process is out of control and corrective actions need to be taken.

Let's note that a process with all the points in between the control limits is not necessarily synonymous with an acceptable process. A process can be in control yet exhibit high variability, or too many of the plotted points may lie close to one control limit and away from the target.

The following chart is a good example of an out of control process with all the points plotted within the control limits.

In this example, points A, B, C, D, E and F are all well within the limits but they do not behave randomly, they exhibit a run up pattern, in other words they follow a steady (increasing) trend. The causes of this run up pattern need to be investigated because it might be the result of a problem with the process.

The interpretation of the control charts patterns is not easy and requires experience and know-how.

Out of Control

o A single point outside the control limits. In Figure 1, point sixteen is above the UCL (upper control limit).

o Two out of three successive points are on the same side of the centerline and farther than 2σ from it. In Figure 1, point 4 sends that signal.

o Four out of five successive points are on the same side of the centerline and farther than 1σ from it. In Figure 1, point 11 sends that signal.

o A run of eight in a row on the same side of the centerline. Or 10 out of 11, 12 out of 14 or 16 out of 20. In Figure 1, point 21 is eighth in a row above the centerline.

o Obvious consistent or persistent patterns that suggest something unusual about your data and your process.

Figure 1 Out-of-control signals


What are the types of Control Charts? There are two main categories of Control Charts: those that display attribute data and those that display variables data.

Common Types of Charts The types of charts are often classified according to the type of quality characteristic that they are supposed to monitor: there are quality control charts for variables and control charts for attributes.

Attribute Data: This category of control chart displays data that result from counting the number of occurrences or items in a single category of similar items or occurrences. These “count” data may be expressed as pass/fail, yes/no, or presence/absence of a defect.

Variable Data: This category of control chart displays values resulting from the measurement of a continuous variable. Examples of variables data are elapsed time, temperature, and radiation dose.

Short Run Charts for Variables

Specifically, the following charts are commonly constructed for controlling variables:

X-Bar chart. In this chart the sample means are plotted in order to control the mean value of a variable (e.g., size of piston rings, strength of materials, etc.).

R chart. In this chart, the sample ranges are plotted in order to control the variability of a variable.

S chart. In this chart, the sample standard deviations are plotted in order to control the variability of a variable.

Nominal chart, target chart. There are several different types of short run charts. The most basic are the nominal short run chart, and the target short run chart. In these charts, the measurements for each part are transformed by subtracting a part-specific constant. These constants can either be the nominal values for the respective parts (nominal short run chart), or they can be target values computed from the (historical) means for each part (Target X-bar and R chart). For example, the diameters of piston bores for different engine blocks produced in a factory can only be meaningfully compared (for determining the consistency of bore sizes) if the mean differences between bore diameters for different sized engines are first removed. The nominal or target short run chart makes such comparisons possible. Note that for the nominal or target chart it is assumed that the variability across parts is identical, so that control limits based on a common estimate of the process sigma are applicable.

Standardized short run chart. If the variability of the process for different parts cannot be assumed to be identical, then a further transformation is necessary before the sample means for different parts can be plotted in the same chart. Specifically, in the standardized short run chart the plot points are further transformed by dividing the deviations of sample means from part means (or nominal or target values for parts) by part-specific constants that are proportional to the variability for the respective parts. For example, for the short run X-bar and R chart, the plot points (that are shown in the X-bar chart) are computed by first subtracting from each sample mean a part-specific constant (e.g., the respective part mean, or nominal value for the respective part), and then dividing the difference by another constant, for example, by the average range for the respective chart. These transformations will result in comparable scales for the sample means for different parts.
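As a small numeric illustration of that transformation (the nominal values and average ranges below are invented, not taken from the text):

# Standardized short run plot point for one sample:
# (sample mean - nominal value for the part) / average range for the part.
nominal   = {"A": 50.00, "B": 75.00}     # invented nominal values per part
avg_range = {"A": 0.20,  "B": 0.35}      # invented average ranges per part

def plot_point(part, sample_mean):
    return (sample_mean - nominal[part]) / avg_range[part]

print(plot_point("A", 50.06))   # 0.30  -> comparable scale across parts
print(plot_point("B", 74.93))   # -0.20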

Short Run Charts for Attributes

For attribute control charts (C, U, Np, or P charts), the estimate of the variability of the process (proportion, rate, etc.) is a function of the process average (average proportion, rate, etc.; for example, the standard deviation of a proportion p is equal to the square root of p*(1- p)/n). Hence, only standardized short run charts are available for attributes. For example, in the short run P chart, the plot points are computed by first subtracting from the respective sample p values the average part p's, and then dividing by the standard deviation of the average p's

For controlling quality characteristics that represent attributes of the product, the following charts are commonly constructed:

C chart. In this chart (see example below), we plot the number of defectives (per batch, per day, per machine, per 100 feet of pipe, etc.). This chart assumes that defects of the quality attribute are rare, and the control limits in this chart are computed based on the Poisson distribution (distribution of rare events).

U chart. In this chart we plot the rate of defectives, that is, the number of defectives divided by the number of units inspected (the n; e.g., feet of pipe, number of batches). Unlike the C chart, this chart does not require a constant number of units, and it can be used, for example, when the batches (samples) are of different sizes.

Np chart. In this chart, we plot the number of defectives (per batch, per day, per machine) as in the C chart. However, the control limits in this chart are not based on the distribution of rare events, but rather on the binomial distribution. Therefore, this chart should be used if the occurrence of defectives is not rare (e.g., they occur in more than 5% of the units inspected). For example, we may use this chart to control the number of units produced with minor flaws.

P chart. In this chart, we plot the percent of defectives (per batch, per day, per machine, etc.) as in the U chart. However, the control limits in this chart are not based on the distribution of rare events but rather on the binomial distribution (of proportions). Therefore, this chart is most applicable to situations where the occurrence of defectives is not rare (e.g., we expect the percent of defectives to be more than 5% of the total number of units produced).
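The p*(1-p)/n relationship mentioned above leads directly to the usual three-sigma limits for a P chart. The sketch below is an illustration only; the counts and sample size are invented.

import numpy as np

defectives = np.array([12, 15, 8, 11, 14, 9, 13, 10])   # invented counts of defective units per sample
n = 200                                                  # units inspected per sample (constant here)

p = defectives / n
p_bar = p.mean()                                         # center line
sigma_p = np.sqrt(p_bar * (1 - p_bar) / n)

UCL = p_bar + 3 * sigma_p
LCL = max(0.0, p_bar - 3 * sigma_p)                      # a proportion cannot fall below zero
print(f"p-bar = {p_bar:.4f}   LCL = {LCL:.4f}   UCL = {UCL:.4f}")
print("out-of-control samples:", np.where((p > UCL) | (p < LCL))[0] + 1)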

Unequal Sample Sizes

When the samples plotted in the control chart are not of equal size, then the control limits around the center line (target specification) cannot be represented by a straight line. For example, to return to the formula Sigma/Square Root(n) presented earlier for computing control limits for the X-bar chart, it is obvious that unequal n's will lead to different control limits for different sample sizes. There are three ways of dealing with this situation.

Average sample size. If one wants to maintain the straight-line control limits (e.g., to make the chart easier to read and easier to use in presentations), then one can compute the average n per sample across all samples, and establish the control limits based on the average sample size. This procedure is not "exact"; however, as long as the sample sizes are reasonably similar to each other, this procedure is quite adequate.

Variable control limits. Alternatively, one may compute different control limits for each sample, based on the respective sample sizes. This procedure will lead to variable control limits, and result in step-chart like control lines in the plot. This procedure ensures that the correct control limits are computed for each sample. However, one loses the simplicity of straight-line control limits.

Stabilized (normalized) chart. The best of two worlds (straight line control limits that are accurate) can be accomplished by standardizing the quantity to be controlled (mean, proportion, etc.) according to units of sigma. The control limits can then be expressed in straight lines, while the location of the sample points in the plot depend not only on the characteristic to be controlled, but also on the respective sample n's. The disadvantage of this procedure is that the values on the vertical (Y) axis in the control chart are in terms of sigma rather than the original units of measurement, and therefore, those numbers cannot be taken at face value (e.g., a sample with a value of 3 is 3 times sigma away from specifications; in order to express the value of this sample in terms of the original units of measurement, we need to perform some computations to convert this number back).

Control Chart for Individual Observations

Variable control charts can be constructed for individual observations taken from the production line, rather than samples of observations. This is sometimes necessary when testing samples of multiple observations would be too expensive, inconvenient, or impossible. For example, the number of customer complaints or product returns may only be available on a monthly basis; yet, one would like to chart those numbers to detect quality problems. Another common application of these charts occurs in cases when automated testing devices inspect every single unit that is produced. In that case, one is often primarily interested in detecting small shifts in the product quality (for example, gradual deterioration of quality due to machine wear). The CUSUM, MA, and EWMA charts of cumulative sums and weighted averages discussed below may be most applicable in those situations.

Control Charts for Variables vs. Charts for Attributes

Sometimes, the quality control engineer has a choice between variable control charts and attribute control charts.

Advantages of attribute control charts. Attribute control charts have the advantage of allowing for quick summaries of various aspects of the quality of a product, that is, the engineer may simply classify products as acceptable or unacceptable, based on various quality criteria. Thus, attribute charts sometimes bypass the need for expensive, precise devices and time-consuming measurement procedures. Also, this type of chart tends to be more easily understood by managers unfamiliar with quality control procedures; therefore, it may provide more persuasive (to management) evidence of quality problems.

Advantages of variable control charts. Variable control charts are more sensitive than attribute control charts (see Montgomery, 1985, p. 203). Therefore, variable control charts may alert us to quality problems before any actual "unacceptables" (as detected by the attribute chart) will occur. Montgomery (1985) calls the variable control charts leading indicators of trouble that will sound an alarm before the number of rejects (scrap) increases in the production process.

Out-Of-Control Process: Runs Tests

As mentioned earlier in the introduction, when a sample point (e.g., mean in an X-bar chart) falls outside the control lines, one has reason to believe that the process may no longer be in control. In addition, one should look for systematic patterns of points (e.g., means) across samples, because such patterns may indicate that the process average has shifted. These tests are also sometimes referred to as AT&T runs rules (see AT&T, 1959) or tests for special causes (e.g., see Nelson, 1984, 1985; Grant and Leavenworth, 1980; Shirland, 1993). The term special or assignable causes as opposed to chance or common causes was used by Shewhart to distinguish between a process that is in control, with variation due to random (chance) causes only, from a process that is out of control, with variation that is due to some non-chance or special (assignable) factors (cf. Montgomery, 1991, p. 102).

Like the sigma control limits discussed earlier, the runs rules are based on "statistical" reasoning. For example, the probability of any sample mean in an X-bar control chart falling above the center line is equal to 0.5, provided (1) that the process is in control (i.e., that the center line value is equal to the population mean), (2) that consecutive sample means are independent (i.e., not auto-correlated), and (3) that the distribution of means follows the normal distribution. Simply stated, under those conditions there is a 50-50 chance that a mean will fall above or below the center line. Thus, the probability that two consecutive means will fall above the center line is equal to 0.5 times 0.5 = 0.25.

Accordingly, the probability that 9 consecutive samples (or a run of 9 samples) will fall on the same side of the center line is equal to 0.5^9 = .00195. Note that this is approximately the probability with which a sample mean can be expected to fall outside the 3-times-sigma limits (given the normal distribution, and a process in control). Therefore, one could look for 9 consecutive sample means on the same side of the center line as another indication of an out-of-control condition. Refer to Duncan (1974) for details concerning the "statistical" interpretation of the other (more complex) tests.

Zone A, B, C. Customarily, to define the runs tests, the area above and below the chart center line is divided into three "zones."

Figure 2a. Zone Control

Figure 2b. Zone Control

By default, Zone A is defined as the area between 2 and 3 times sigma above and below the center line; Zone B is defined as the area between 1 and 2 times sigma, and Zone C is defined as the area between the center line and 1 times sigma.

Control charts interpretation

9 points in Zone C or beyond (on one side of central line). If this test is positive (i.e., if this pattern is detected), then the process average has probably changed. Note that it is assumed that the distribution of the respective quality characteristic in the plot is symmetrical around the mean. This is, for example, not the case for R charts, S charts, or most attribute charts. However, this is still a useful test to alert the quality control engineer to potential shifts in the process. For example, successive samples with less-than-average variability may be worth investigating, since they may provide hints on how to decrease the variation in the process.

6 points in a row steadily increasing or decreasing. This test signals a drift in the process average. Often, such drift can be the result of tool wear, deteriorating maintenance, improvement in skill, etc. (Nelson, 1985).

14 points in a row alternating up and down. If this test is positive, it indicates that two systematically alternating causes are producing different results. For example, one may be using two alternating suppliers, or monitor the quality for two different (alternating) shifts.

2 out of 3 points in a row in Zone A or beyond. This test provides an "early warning" of a process shift. Note that the probability of a false-positive (test is positive but process is in control) for this test in X-bar charts is approximately 2%.


4 out of 5 points in a row in Zone B or beyond. Like the previous test, this test may be considered to be an "early warning indicator" of a potential process shift. The false-positive error rate for this test is also about 2%.

15 points in a row in Zone C (above and below the center line). This test indicates a smaller variability than is expected (based on the current control limits).

8 points in a row in Zone B, A, or beyond, on either side of the center line (without points in Zone C). This test indicates that different samples are affected by different factors, resulting in a bimodal distribution of means. This may happen, for example, if different samples in an X-bar chart were produced by one of two different machines, where one produces above-average parts and the other below-average parts.
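These run-and-zone tests are easy to automate once the center line and sigma are known. The following Python sketch is illustrative only and is not part of the original text; the function and data names are made up. It checks two of the rules above (9 consecutive points on one side of the center line, and 2 out of 3 points beyond 2 sigma on the same side) for a list of plotted subgroup means.

def runs_test_9_one_side(points, center):
    """Flag indexes where 9 consecutive points fall on the same side of the center line."""
    signals = []
    for i in range(8, len(points)):
        window = points[i - 8:i + 1]
        if all(p > center for p in window) or all(p < center for p in window):
            signals.append(i)
    return signals

def two_of_three_in_zone_a(points, center, sigma):
    """Flag indexes where 2 of 3 consecutive points are beyond 2 sigma on the same side."""
    signals = []
    for i in range(2, len(points)):
        window = points[i - 2:i + 1]
        above = sum(1 for p in window if p > center + 2 * sigma)
        below = sum(1 for p in window if p < center - 2 * sigma)
        if above >= 2 or below >= 2:
            signals.append(i)
    return signals

# Made-up subgroup means for illustration:
means = [10.1, 9.9, 10.0, 9.8, 10.2, 10.4, 10.3, 10.2, 10.5, 10.4, 10.6, 10.3, 10.2]
print(runs_test_9_one_side(means, center=10.0))              # indexes where the runs rule fires, if any
print(two_of_three_in_zone_a(means, center=10.0, sigma=0.2)) # indexes where the Zone A rule fires, if any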

Control charts provide the operational definition of the term special cause. A special cause is simply anything which leads to an observation beyond a control limit. However, this simplistic use of control charts does not do justice to their power. Control charts are running records of the performance of the process and, as such, they contain a vast store of information on potential improvements. While some guidelines are presented here, control chart interpretation is an art that can only be developed by looking at many control charts and probing the patterns to identify the underlying system of causes at work.

Figure 3 Control chart patterns: freaks.

Freak patterns are the classical special cause situation. Freaks result from causes that have a large effect but that occur infrequently. When investigating freak values, look at the cause-and-effect diagram for items that meet these criteria. The key to identifying freak causes is timeliness in collecting and recording the data. If you have difficulty, try sampling more frequently.

Figure 4 Control chart patterns: drift.

Drift is generally seen in processes where the current process value is partly determined by the previous process state. For example, if the process is a plating bath, the content of the tank cannot change instantaneously; instead, it will change gradually. Another common example is tool wear: the size of the tool is related to its previous size. Once the cause of the drift has been determined, the appropriate action can be taken. Whenever economically feasible, the drift should be eliminated, e.g., install an automatic chemical dispenser for the plating bath, or make automatic compensating adjustments to correct for tool wear. Note that the total process variability increases when drift is allowed, which adds cost. When this is not possible, the control chart can be modified in one of two ways:

1. Make the slope of the center line and control limits match the natural process drift. The control chart will then detect departures from the natural drift.


2. Plot deviations from the natural or expected drift.

Figure 5 Control chart patterns: cycles.

Cycles often occur due to the nature of the process. Common cycles include hour of the day, day of the week, month of the year, quarter of the year, week of the accounting cycle, etc. Cycles are caused by modifying the process inputs or methods according to a regular schedule. The existence of this schedule and its effect on the process may or may not be known in advance. Once the cycle has been discovered, action can be taken. The action might be to adjust the control chart by plotting the control measure against a variable base. For example, if a day-of-the-week cycle exists for shipping errors because of the workload, you might plot shipping errors per 100 orders shipped instead of shipping errors per day. Alternatively, it may be worthwhile to change the system to smooth out the cycle. Most processes operate more efficiently when the inputs are relatively stable and when methods are changed as little as possible.

Figure 6 Control chart patterns: repeating patterns.

A controlled process will exhibit only "random looking" variation. A pattern where every nth item is different is, obviously, non-random. These patterns are sometimes quite subtle and difficult to identify. It is sometimes helpful to see if the average fraction defective is close to some multiple of a known number of process streams. For example, if the machine is a filler with 40 stations, look for problems that occur 1/40, 2/40, 3/40, etc., of the time.

Figure 7 Control chart patterns: discrete data.

When plotting measurement data the assumption is that the numbers exist on a continuum, i.e., there will be many different values in the data set. In the real world, the data are never completely continuous. It usually doesn’t matter much if there are, say, 10 or more different numbers. However, when there are only a few numbers that appear over-and-over it can cause problems with the analysis. A common problem is that the R chart will underestimate the average range, causing the control limits on both the average and range charts to be too close together. The result will be too many "false alarms" and a general loss of confidence in SPC.


The usual cause of this situation is inadequate gage resolution. The ideal solution is to obtain a gage with greater resolution. Sometimes the problem occurs because operators, inspectors, or computers are rounding the numbers. The solution here is to record additional digits.

Figure 8 Control chart patterns: planned changes.

The reason SPC is done is to accelerate the learning process and to eventually produce an improvement. Control charts serve as historical records of the learning process and they can be used by others to improve other processes. When an improvement is realized the change should be written on the old control chart; its effect will show up as a less variable process. These charts are also useful in communicating the results to leaders, suppliers, customers, and others interested in quality improvement.

Figure 9 Control chart patterns: suspected differences.

Seemingly random patterns on a control chart are evidence of unknown causes of variation, which is not the same as uncaused variation. There should be an ongoing effort to reduce the variation from these so-called common causes. Doing so requires that the unknown causes of variation be identified. One way of doing this is a retrospective evaluation of control charts. This involves brainstorming and preparing cause and effect diagrams, then relating the control chart patterns to the causes listed on the diagram. For example, if "operator" is a suspected cause of variation, place a label on the control chart points produced by each operator. If the labels exhibit a pattern, there is evidence to suggest a problem. Conduct an investigation into the reasons and set up controlled experiments (prospective studies) to test any theories proposed. If the experiments indicate a true cause and effect relationship, make the appropriate process improvements. Keep in mind that a statistical association is not the same thing as a causal correlation. The observed association must be backed up with solid subject-matter expertise and experimental data.

Figure 10 Control chart patterns: mixture.

Mixture exists when data from two different cause-systems are plotted on a single control chart. It indicates a failure in creating rational subgroups. The underlying differences should be identified and corrective action taken. The nature of the corrective action will determine how the control chart should be modified.


Mixture example #1: The mixture represents two different operators who can be made more consistent. A single control chart can be used to monitor the new, consistent process.

Mixture example #2: The mixture is in the number of emergency room cases received on Saturday evening, versus the number received during a normal week. Separate control charts should be used to monitor patient-load during the two different time periods.

Figure 11 Types of Out of Control


When do we recalculate control limits? Since a control chart "compares" the current performance of the process characteristic to the past performance of this characteristic, changing the control limits frequently would negate any usefulness. So, only change your control limits if you have a valid, compelling reason for doing so. Some examples of reasons:

- When you have at least 30 more data points to add to the chart and there have been no known changes to the process; you get a better estimate of the variability.
- If a major process change occurs and affects the way your process runs.
- If a known, preventable act changes the way the tool or process would behave (power goes out, consumable is corrupted or of bad quality, etc.).

What are the WECO (Western Electric Company) rules for signaling "Out of Control"? The WECO rules are based on probability. We know that, for a normal distribution, the probability of encountering a point outside ±3 sigma is 0.3%. This is a rare event. Therefore, if we observe a point outside the control limits, we conclude the process has shifted and is unstable. Similarly, we can identify other events that are equally rare and use them as flags for instability. The probability of observing two points out of three in a row between 2 and 3 sigma, and the probability of observing four points out of five in a row between 1 and 2 sigma, are also about 0.3%.
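As a rough check on the probabilities quoted above, the rare-event rates behind the WECO rules can be computed directly from the normal distribution. The sketch below is illustrative only (not from the original text) and uses the "beyond k sigma on one side" reading of the rules.

from math import erf, sqrt, comb

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Probability of a single point beyond the 3-sigma limits (both sides combined).
p_outside_3sigma = 2.0 * (1.0 - phi(3.0))            # about 0.0027, i.e. roughly 0.3%

# Probability that 2 of 3 consecutive points fall beyond 2 sigma on one given side.
p2 = 1.0 - phi(2.0)
p_2_of_3 = comb(3, 2) * p2**2 * (1.0 - p2) + p2**3   # about 0.0015 per side

# Probability that 4 of 5 consecutive points fall beyond 1 sigma on one given side.
p1 = 1.0 - phi(1.0)
p_4_of_5 = comb(5, 4) * p1**4 * (1.0 - p1) + p1**5   # about 0.0028 per side

print(p_outside_3sigma, p_2_of_3, p_4_of_5)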


The X-BAR and R CONTROL CHART

An Xbar & R Control Chart is one that shows both the mean value (X-bar) and the range (R). The Xbar portion of the chart mainly shows any changes in the mean value of the process, while the R portion shows any changes in the dispersion of the process. This chart is particularly useful in that it shows changes in the mean value and the dispersion of the process at the same time, making it a very effective method for checking abnormalities within the process; and, if charted while the process is in progress, it also points out problems in the production flow in real time.

X-bar & Range Charts are a set of control charts for variables data (data that is both quantitative and continuous in measurement, such as a measured dimension or time). The X-bar chart monitors the process location over time, based on the average of a series of observations, called a subgroup. The Range chart monitors the variation between observations in the subgroup over time.

Our X-bar & Range Chart accurately accounts for subgroups of varying sizes using Burr weighting factors to determine process sigma (Burr,1969). For a general discussion of X-bar & R Charts, refer to any Statistical Quality Control textbook, such as (Pyzdek, 1990, 1992a) or (Montgomery, 1991), or The Memory Jogger (GOAL/QPC).

THE CONTROL CHART DEFINED

Thus far in our training, we have learned that Histograms and Check sheets consolidate the data collected, to show the overall picture, while the Pareto diagram is used to indicate problem areas. However, for production purposes, we want to know more about the nature of changes that take place over a specified period of time, or as they occur in "real time".

Control charts are generally used in a production or manufacturing environment and are used to control, monitor and IMPROVE a process. Common causes are always present and generally attributed to machines, material and time vs. temperature. This normally takes a minor adjustment to the process to make the correction and return the process to a normal output. HOWEVER, when making a change to the process, it should always be a MINOR change. If a plot is observed that shows a slight deviation trend upward or downward, the "tweaking" adjustment should be a slight change, and then another observation should be made. Too often people will over-correct by making too big of an adjustment which then causes the process to dramatically shift in the other direction. For that reason, all changes to the process should be SLIGHT and GRADUAL!

A control chart is a graph or chart with limit lines, called control lines. There are basically three kinds of control lines:

the upper control limit (UCL), the central line (actual nominal size of product), the lower control limit (LCL).

The purpose of drawing a control chart is to detect any changes in the process that would be evident by any abnormal points listed on the graph from the data collected. If these points are plotted in "real time", the operator will immediately see that the point is exceeding one of the control limits, or is heading in that direction, and can make an immediate adjustment. The operator should also record on the chart the cause of the drift, and what was done to correct the problem, bringing the process back into a "state of control".

The method in which data is collected to be charted is as follows: A sampling plan is devised to measure parts and then to chart that measurement at a specified interval. The time interval and method of collection will vary. For our example, we will say that we collect data five times a day at specified time intervals. In making the control chart, the daily data is averaged out in order to obtain an average value for that day. Each of these values then becomes a point on the control chart that represents the characteristics of that given day. To explain further, the five measurements made in one day constitute one subgroup, or one plot point. In some manufacturing firms, measurements are taken every 15 minutes, and the four measurements (one subgroup) are totaled and then an average value is calculated. This value then equals one plot for the hour, and that plot is placed on the chart; thus, one plot point on the chart every hour of the working day.


When these plot points fall outside the UCL or LCL, some form of change must occur on the assembly or manufacturing line. Further, the cause needs to be investigated and proper action taken to prevent it from happening again; this is called preventive action, and continuous improvement, in the Quality world. The use of control charts is called "process control." In reality, however, a trend will develop that indicates the process is leading away from the center line, and corrective action is usually taken prior to a point exceeding one of the control limits. There are two main types of Control Charts. Certain data are based upon measurements, such as the measurement of unit parts. These are known as "indiscrete values" or "continuous data". Other types of data are based on counting, such as the number of defective articles or the number of defects. These are known as "discrete values" or "enumerated data".

WHEN TO USE an X-BAR / R CHART

X-bar / Range charts are used when you can rationally collect measurements in groups (subgroups) of between two and ten observations. Each subgroup represents a "snapshot" of the process at a given point in time. The charts' x-axes are time based, so that the charts show a history of the process. For this reason, you must have data that is time-ordered; that is, entered in the sequence from which it was generated. If this is not the case, then trends or shifts in the process may not be detected, but instead attributed to random (common cause) variation.

For subgroup sizes greater than ten, use X-bar / Sigma charts, since the range statistic is a poor estimator of process sigma for large subgroups. In fact, the subgroup sigma is ALWAYS a better estimate of subgroup variation than the subgroup range. The popularity of the Range chart is only due to its ease of calculation, dating to its use before the advent of computers. For subgroup sizes equal to one, an Individual-X / Moving Range chart can be used, as well as EWMA or CUSUM charts.

X-bar Charts are efficient at detecting relatively large shifts in the process average, typically shifts of ±1.5 sigma or larger. The larger the subgroup, the more sensitive the chart will be to shifts, provided a Rational Subgroup can be formed. For more sensitivity to smaller process shifts, use an EWMA or CUSUM chart.

X Bar Chart Calculations

Subgroup Average

1. Average/ Center Line

The Average, sometimes called X-Bar, is calculated for a set of n data values as:

X-bar = (x1 + x2 + ... + xn) / n

An example of its use is as the plotted statistic in an X-Bar Chart. Here, n is the subgroup size, and x-bar indicates the average of the observations in the subgroup. The construction of the X-bar chart follows the same principle as that of the attribute control charts, with the difference that quantitative measurements of the Critical-To-Quality characteristics are considered instead of qualitative attributes.

Samples are taken, and the mean (X-bar) of each sample is derived and plotted on the chart.

The center line (CL) is determined by averaging the X-bar values:

X-double-bar = (X-bar-1 + X-bar-2 + ... + X-bar-n) / n

where n represents the number of samples.

When dealing with subgrouped data, you can also calculate the overall average of the subgroups. It is the average of the subgroups' averages, so is sometimes called X-doublebar.


where n is the subgroup size and m is the total number of subgroups included in the analysis.

When the subgroup size is 1, this equation simplifies to:

The next step will be to determine the Upper Control Limit (UCL) and the Lower Control Limit (LCL). We have determined k to be equal to 3; the only remaining variable of this equation is the process standard deviation, which can be determined in several ways. One way to do it would be through the use of the standard error estimate, and another would be the use of the mean range. There is a special relationship between the mean range and the standard deviation for normally distributed data:

sigma = R-bar / d2

where the constant d2 is a function of n.

2. UCL, LCL (Upper and Lower Control Limit)

The standard error based chart is straightforward. Based on the Central Limit Theorem, the standard deviation used for the Control Limits is nothing but the standard deviation of the process divided by the square root of the number of samples, and we obtain:

UCL = X-double-bar + 3 * sigma-x / sqrt(n)
LCL = X-double-bar - 3 * sigma-x / sqrt(n)

where X-double-bar is the Grand Average and sigma-x is Process Sigma, which is calculated using the Subgroup Range or Subgroup Sigma statistic.

Process Sigma based on Range Chart. This is the process sigma of the subgroups. Note: this is the standard deviation of the observations, NOT the standard deviation of the subgroup averages, which is calculated by dividing the standard deviation of the observations by the square root of n (the subgroup size).

sigma-x = R-bar / d2   (n = constant)

OR, when n is not constant, sigma-x is computed from the subgroup ranges using Burr's weighting constants,

where:
Rj is the Subgroup Range of subgroup j,
R-bar is the Average Range,
m is the total number of subgroups included in the analysis,
d2 is a function of n (available in any statistical quality control textbook), and
ej and fj are Burr's constants (Burr, 1969).

Sigma Chart Calculations

Plotted statistic

1. The subgroup standard deviation

where xi are the observations in subgroup j, x-barj is the subgroup average for subgroup j, and n is the subgroup size.

2. Average Sigma

UCL, LCL (Upper and Lower Control Limit)

where S-bar is the Average Sigma and c4 is a function of n (available in any statistical quality control textbook).

Notes: Some authors prefer to write this as:

where B3 and B4 are a function of n (available in any statistical quality control textbook).

Range Chart Calculations

Plotted statistic: the subgroup range,

Rj = x (largest value) - x (smallest value)

where x1, x2, … are the n observations in subgroup j.

1. Center Line /Average Range

UCL , LCL (Upper and Lower Control Limit)

where R-bar is the Average Range, d3 is a function of n (available in any statistical quality control textbook), and sigma-x is Process Sigma, which is calculated using the Subgroup Range.

Notes: Some authors prefer to write this as:

where D3 and D4 are a function of n (available in any statistical quality control textbook), with the limits expressed directly in terms of R-bar (the Average Range) or S-bar (the Average Sigma).

2. Average Range

The average of the subgroup ranges:

R-bar = (R1 + R2 + ... + Rm) / m   (n = constant)

OR, when n is not constant, a weighted average of the subgroup ranges is used,

where:
Rj is the Subgroup Range of subgroup j,
m is the total number of subgroups included in the analysis,
sigma-x is the Process Sigma based on the Range chart, and
d2 is a function of n (available in any statistical quality control textbook).

Note: When control limits for the X-Bar chart are defined as fixed values (such as when historical data is used to define control limits), the Average Range (R-bar) must be back calculated from these pre-defined control limits. This ensures that the control limits on the Range chart are at the same sensitivity as those on the X-Bar chart. In this case:

where d2 (available in any statistical quality control textbook) is based on the subgroup size n.

3. Moving Range

When individual (samples composed of a single item) CTQ characteristics are collected, moving range control charts can be used to monitor process quality. The variability of the process is measured in terms of the distribution of the absolute values of the difference of every two successive observations.

Let xi be the ith observation; the moving range MR will be:

MRi = | xi - xi-1 |

and the mean moving range, MR-bar, will be the average of these values.

The standard deviation is obtained by dividing MR-bar by the constant d2. Since the moving range only involves two observations, n is equal to two, and therefore d2 will always be 1.128 for this case.

Since d2 is 1.128, the control limit equations simplify to:

UCL = X-bar + 3 MR-bar / 1.128 = X-bar + 2.66 MR-bar
LCL = X-bar - 3 MR-bar / 1.128 = X-bar - 2.66 MR-bar
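As an illustrative sketch (not from the original text), the individuals-chart limits described above can be computed from the average moving range; the 2.66 factor is simply 3 divided by 1.128, and the data list is made up.

def xmr_limits(observations):
    """Individuals (X) chart limits from the average moving range (d2 = 1.128 for n = 2)."""
    moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
    x_bar = sum(observations) / len(observations)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    ucl = x_bar + 3.0 * mr_bar / 1.128   # equivalently x_bar + 2.66 * mr_bar
    lcl = x_bar - 3.0 * mr_bar / 1.128
    return x_bar, mr_bar, ucl, lcl

# Example with made-up individual measurements:
print(xmr_limits([10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3]))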


STEPS IN MAKING the X-BAR and R-CHART

STEP #1 - Collect the data. It is best to have at least 100 samples.

STEP #2 - Divide the data into subgroups; it is recommended that the subgroups be of 4 or 5 data points each. The number of samples is represented by the letter "n" and the number of subgroups is represented by the letter "k". The data should be divided into subgroups in keeping with the following conditions:

1. The data obtained should be from the same grouping of products produced.
2. A subgroup should not include data from a different lot or different process.

STEP #3 - Record the data on a data sheet. Design the sheet so that it is easy to compute the values of X bar and R for each sub group (see the page in the class example).

STEP #4 - Find the mean value (Xbar). Use the following formula for each subgroup:

Xbar = (x1 + x2 + ... + xn) / n

STEP #5 - Find the range, R. Use the following formula for each subgroup.

R = X (largest value) - X (smallest value). Example: 14.0 - 12.1 = 1.9

EXAMPLE: 3. It is now time for you to practice some of your learning. I have completed many of the Xbar and R values for you, however, you really should perform a few calculations to gain the experience. Using the attached Exercise Sheet, calculate the remaining Xbar and R values.


STEP #6 - Find the overall mean, or X double bar. Total the mean values of Xbar for each subgroup and divide by the number of subgroups (k).

STEP #7 - Compute the average value of the range (R). Total R for all the groups and divide by the number of subgroups (k).

STEP #8 - Compute the Control Limit Lines. Use the following formulas for the Xbar and R Control Charts. The coefficients for calculating the control lines, A2, D4, and D3, are located on the bottom of the Work Sheet you are presently using, and are presented here:

Xbar Control Chart:
Central Line (CL) = the X double bar figure you calculated.
Upper Control Limit (UCL) = X double bar + A2 * R bar.
Lower Control Limit (LCL) = X double bar - A2 * R bar.

R Control Chart:
Central Line (CL) = the R bar figure you calculated.
Upper Control Limit (UCL) = D4 * R bar.
Lower Control Limit (LCL) = D3 * R bar.

For our Class Exercise, the details are as follows:

X Control Chart:
CL = X double bar = 12.94
UCL = 12.94 + 0.577 * 1.35 = 13.719. Note that the subgroup size is 5, so on the chart n = 5, and under the A2 column the value for n = 5 is 0.577. 1.35 is the figure you calculated for R bar.
LCL = 12.94 - 0.577 * 1.35 = 12.161

R Control Chart:
CL = R bar = 1.35
UCL = 2.115 * 1.35 = 2.86. Note that n = 5, and under the D4 column the value for n = 5 is 2.115.
LCL = Since our subgroup size equals 5, if you look under the D3 column there is no calculation coefficient to apply; thus, there is no LCL.
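The Step #8 arithmetic for this class exercise can be reproduced with a short script. This is only a sketch; the constants A2 = 0.577 and D4 = 2.115 for a subgroup size of 5 are the worksheet values quoted above, and D3 is taken as zero since the worksheet lists no coefficient for n = 5.

def xbar_r_limits(x_double_bar, r_bar, a2, d3, d4):
    """Control limits for the X-bar and R charts from the grand average and average range."""
    xbar_ucl = x_double_bar + a2 * r_bar
    xbar_lcl = x_double_bar - a2 * r_bar
    r_ucl = d4 * r_bar
    r_lcl = d3 * r_bar
    return xbar_ucl, xbar_lcl, r_ucl, r_lcl

# Class exercise values: X double bar = 12.94, R bar = 1.35, subgroup size n = 5.
print(xbar_r_limits(12.94, 1.35, a2=0.577, d3=0.0, d4=2.115))
# roughly (13.72, 12.16, 2.86, 0.0), matching the worked figures above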

STEP #9 - Construct the Control Chart. Using graph paper or Control Chart paper, set the index so that the upper and lower control limits will be separated by 20 to 30 mm (units). Draw in the control lines CL, UCL and LCL, and label them with their appropriate numerical values. It is recommended that you use a blue or black line for the CL, and a red line for the UCL and LCL. The central line is a solid line. The Upper and Lower control limits are usually drawn as broken lines.

STEP #10 - Plot the Xbar and R values as computed for each subgroup. For the Xbar values, use a dot (.), and for the R values, use an (x). Circle any points that lie outside the control limit lines so that you can distinguish them from the others. The plotted points should be about 2 to 5 mm apart. Below is what our Xbar chart looks like when plotted.

Below is what our Rbar chart looks like when plotted.

STEP #11 - Write in the necessary information. On the top center of the control charts, write "Xbar Chart" and "R Chart" respectively, so that you (and others) will know which chart is which. On the upper left hand corner of the Xbar control chart, write the n value to indicate the subgroup size; in this case n = 5.

ANALYSIS OF THE CONTROL CHART

Interpreting an X-bar / R Chart

Always look at the Range chart first. The control limits on the X-bar chart are derived from the average range, so if the Range chart is out of control, then the control limits on the X-bar chart are meaningless. After reviewing the Range chart, interpret the points on the X-bar chart relative to the control limits and Run Tests. Never consider the points on the X-bar chart relative to specifications, since the observations from the process vary much more than the subgroup averages.

Interpreting the Range Chart

On the Range chart, look for out of control points. If there are any, then the special causes must be eliminated. Brainstorm and conduct Designed Experiments to find those process elements that contribute to sporadic changes in variation. To use the data you have, turn Auto Drop ON, which will remove the statistical bias of the out of control points by dropping them from the calculations of the average Range, Range control limits, average X-bar and X-bar control limits.

Also on the range chart, there should be more than five distinct values plotted, and no one value should appear more than 25% of the time. If there are values repeated too often, then you have inadequate resolution of your measurements, which will adversely affect your control limit calculations. In this case, you'll have to look at how you measure the variable, and try to measure it more precisely.

Once you've removed the effect of the out of control points from the Range chart, look at the X-bar Chart.

Interpreting the X-bar Chart

After reviewing the Range chart, look for out of control points on the X-bar Chart. If there are any, then the special causes must be eliminated. Brainstorm and conduct Designed Experiments to find those process elements that contribute to sporadic changes in process location. To use the data you have, turn Auto Drop ON, which will remove the statistical bias of the out of control points by dropping them from the calculations of the average X-bar and X-bar control limits.

Look for obviously non-random behavior. Turn on the Run Tests, which apply statistical tests for trends to the plotted points.

If the process shows control relative to the statistical limits and Run Tests for a sufficient period of time (long enough to see all potential special causes), then we can analyze its capability relative to requirements. Capability is only meaningful when the process is stable, since we cannot predict the outcome of an unstable process.

Now that we know how to make a control chart, it is even more important to understand how to interpret them and realize when there is a problem. All processes have some kind of variation, and this process variation can be partitioned into two main components. First, there is natural process variation, frequently called "common cause" or system variation. These are common variations caused by machines, material and the natural flow of the process. Secondly is special cause variation, generally caused by some problem or extraordinary occurrence in the system. It is our job to work at trying to eliminate or minimize both of these types of variation. Below is an example of a few different process variations, and how to recognize a potential problem.

Types of Errors:

Control limits on a control chart are commonly drawn at 3 sigma from the center line because 3-sigma limits are a good balance point between two types of errors:

Type I or alpha errors occur when a point falls outside the control limits even though no special cause is operating. The result is a witch-hunt for special causes and adjustment of things here and there. The tampering usually distorts a stable process as well as wasting time and energy.

Type II or beta errors occur when you miss a special cause because the chart isn't sensitive enough to detect it. In this case, you will go along unaware that the problem exists and thus unable to root it out.

All process control is vulnerable to these two types of errors. The reason that 3-sigma control limits balance the risk of error is that, for normally distributed data, data points will fall inside 3-sigma limits 99.7% of the time when a process is in control. This makes the witch hunts infrequent but still makes it likely that unusual causes of variation will be detected.

In the above chart, there are three divided sections. The first section is termed "out of statistical control" for several reasons. Notice the inconsistent plot points, and that one point is outside of the control limits. This means that a source of special cause variation is present; it needs to be analyzed and resolved. Having a point outside the control limits is usually the most easily detectable condition. There is almost always an associated cause that can be easily traced to some malfunction in the process.

In the second section, even though the process is now in control, it is not really a smooth flowing process. All the points lie within the control limits, and thus the process exhibits only common cause variation.

In the third section, you will notice that the trending is more predictable and smoother flowing. It is in this section that there is evidence of process improvement and the variation has been reduced.

Therefore, to summarize, eliminating special cause variation keeps the process in control; process improvement reduces the process variation, and moves the control limits in toward the centerline of the process. At the beginning of this process run, it was in need of adjustment as the product output was sporadic. An adjustment was made, and while the plotted points were now within the boundaries, it is still not centered around the process specification. Finally, the process was tweaked a little more and in the third section, the process seems to center around the CL.

There are a few more terms listed below that you need to become familiar with when analyzing a Xbar Chart and the process:

RUN - When several plotted points line up consecutively on one side of a Central Line (CL), whether it is located above or below the CL, it is called a "run". If there are 7 points in a row on one side of the CL, there is an abnormality in the process and it requires an adjustment.

TREND - If there is a continued rise or fall in a series of points (like an upward or downward slant), it is considered a "trend" and usually indicates a process is drifting out of control. This usually requires a machine adjustment.

PERIODICITY - If the plotted points show the same pattern of change over equal intervals, it is called "periodicity". It looks much like a uniform roller coaster of the same size ups and downs around the centerline. This process should be watched closely as something is causing a defined uniform drift to both sides of the centerline.

HUGGING - When the points on the control chart seem to stick close to the center line or to a control limit line, it is called "hugging of the control line". This usually indicates that a different type of data, or data from different factors (or lines) have been mixed into the sub groupings. To determine if you are experiencing "hugging" of the control line, perform the following exercise. Draw a line equal distance between the centerline and the upper control limit. Then draw another line equal distance between the center line and the lower control limit. If the points remain inside of these new lines, there is an abnormality, and the process needs closer analysis.

EXAMPLE: 4. The data are as follows:

The following steps are used to construct the control limits based on the moving range of successive data points, hence the name XmR (X is the individual values, mR is the moving range). This is also called an "Individuals and Moving Range Chart".

Steps:
a) List the data in its time series order.
b) Calculate the average. This becomes the center line of the control chart.
c) Calculate the absolute value differences (ranges) between each set of points. There will be one less range than there are data points.
d) Determine the median range. List the ranges from highest to lowest and find the middle of the list.
e) Multiply the median range by 3.14. This determines the distance of the control limits from the center line.
f) Calculate the control limits: Add the result of Step e) to the average from Step b) to get the Upper Control Limit (UCL). Subtract the result of Step e) from the average to get the Lower Control Limit (LCL).

g) Plot the data in time series order and draw a solid center line at X, the average.

h) Draw dashed lines to indicate the control limits.

Answer:


1. The data have been ordered and the ranges calculated, as given in the table to the left.
2. Calculate the average, X, which will be the center line on the control chart. X = 233 / 25 = 9.32
3. Determine the median range. This step requires ordering the ranges from smallest to largest and finding the value(s) in the middle. Ordered ranges: 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 5, 5, 5, 6, 6, 7, 8, 8, 14, 16. R = 3
4. Multiply the median range by 3.14. 3 x 3.14 = 9.42
5. Calculate the upper control limit (UCL) by adding the distance obtained in step 4 to the average calculated in step 2. UCL = 9.32 + 9.42 = 18.74
6. Calculate the lower control limit (LCL) by subtracting the distance obtained in step 4 from the average calculated in step 2. LCL = 9.32 - 9.42 = -0.10. If the data collected cannot take on values less than zero, the lower control limit is adjusted to this minimum value, as in this case. LCL = 0
7. The XmR control chart can now be plotted.

Analysis: Given that the upper control limit is 18.74, any point larger than 18.74 would be an outlier and, therefore, signal a possible special cause of variation. Data point #10 has a value of 22 and should be investigated to discover why the system was out of statistical control at that point.
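A minimal sketch (not part of the original) of the median-range XmR calculation described in the steps above. The 25 raw values are in the original table, so the function only shows the mechanics, and the final line reproduces the quoted summary figures.

import statistics

def xmr_limits_median_range(data):
    """XmR limits using the median moving range and the 3.14 factor from the steps above."""
    ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    center = sum(data) / len(data)
    spread = 3.14 * statistics.median(ranges)
    ucl = center + spread
    lcl = max(0.0, center - spread)   # clamp at zero for data that cannot be negative
    return center, ucl, lcl

# xmr_limits_median_range(raw_data) would return these values for the table's 25 points;
# using the quoted summary figures (average 9.32, median range 3):
print(9.32 + 3.14 * 3, max(0.0, 9.32 - 3.14 * 3))   # 18.74 and 0, as in the worked answer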

EXAMPLE: 5.

Consider that we have a precision-made piece coming off an assembly line. We wish to see if the process that determines the object's diameter (say, for example, the molding process) is in control. Let's say you are charting waiting times in an urgent care center. The chart may indicate a system in control, hovering very tightly and consistently around a mean wait of forty minutes. This is not acceptable! The proper way to use a control chart in this instance is to verify that your system remains in control as you make operational changes to reduce the waiting time. You may, for instance, hire two more people. Let's say you do this, and waiting time clearly drops, but now your system is out of control. What do you do then?

Take a sample of FIVE objects and measure each.

We are not going to pursue this analysis to a conclusion here. The point is that a manager must be asking two questions- first, is my system in control? Second, is the level of performance acceptable or better? You use the chart to record, monitor and evaluate performance as you change or maintain a system. Your operational goal will be whatever standard management has set, but your process goal will always be a system in control.


Note: The sample size is traditionally 5, as in the example above. Control charts can and do use sample sizes other than 5, but the control limit factors change (here, .577 for the X chart). These factors, by the way, were derived by statisticians and will not be explained further here.

a) Calculate the average of the five. This is one data point for the X chart, called "Xbar". We will represent X Bar as X' (X prime)

b) Calculate the range (largest minus smallest) of the five. This is one data point for the R Chart, called simply "R".

c) Repeat steps a) and b) twenty (20) times. You will have 20 "X bar" points and 20 "R" points.
d) Calculate the average of the 20 X bar points - yes, the average of the averages. This value is "X Double Bar"; we will show it as X" (X double prime). It is the centerline of the X Chart.
e) Calculate the average of the twenty R points. This is called "R Bar", R prime for us (R'). It is the centerline of the R chart, and is also used in calculating the control limits for both the X chart and the R chart.

f) Calculate the upper and lower control limits for the X Chart, using the following equations:

Plot the original X Bar and R points, twenty of each, on the respective X and R control charts. Identify points which fall outside of the control limits. It is these points which are due to unanticipated or unacceptable causes. These are the points that require management attention. In general, if no points are outside of the control limits, the system is in control and should not be interfered with. The fact that the results are in control, however, does not mean they are acceptable.


EXAMPLE: 6

A quality control inspector at the Cocoa Fizz soft drink company has taken twenty-five samples with four observations each of the volume of bottles filled. The data and the computed means are shown in the table. If the standard deviation of the bottling operation is 0.14 ounces, use this information to develop control limits of three standard deviations for the bottling operation.

The solution is as follows:

a. The center line of the control data is the average of the samples:

b. The control limits are:


c. The resulting control chart is:

A quality control inspector at Cocoa Fizz is using the data to develop control limits. If the average range for the twenty-five samples is 0.29 ounces (computed as 7.17/25) and the average mean of the observations is 15.95 ounces, develop three-sigma control limits for the bottling operation.

The value of A2 obtained from the Table for n = 4 is A2 = 0.73. This leads to the following limits:

The quality control inspector at Cocoa Fizz would like to develop a range (R) chart in order to monitor volume dispersion in the bottling process. Using the data above to develop control limits for the sample range, the range limits are as follows:
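The resulting limits appear in the original figures; as a sketch, they can be reproduced from the summary values above. D3 = 0 and D4 = 2.282 for n = 4 are assumed from a standard factor table and are not quoted in the text.

from math import sqrt

x_double_bar = 15.95   # average of the sample means (ounces)
r_bar = 0.29           # average range (ounces)
sigma = 0.14           # known process standard deviation (ounces)
n = 4                  # observations per sample
a2, d3, d4 = 0.73, 0.0, 2.282

# X-bar chart limits using the known sigma (Example 6):
print(x_double_bar + 3 * sigma / sqrt(n), x_double_bar - 3 * sigma / sqrt(n))   # 16.16 and 15.74
# X-bar chart limits using A2 and R-bar:
print(x_double_bar + a2 * r_bar, x_double_bar - a2 * r_bar)                     # about 16.16 and 15.74
# R chart limits:
print(d4 * r_bar, d3 * r_bar)                                                   # about 0.66 and 0.0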


We can see that mean and range charts are used to monitor different variables. The mean or X-Bar chart measures the central tendency of the process. Since both variables are important, it makes sense to monitor a process using both mean and range charts. It is possible to have a shift in the mean of the product but not a change in the dispersion. For example, at the Cocoa Fizz bottling plant the machine setting can shift so that the average bottle filled contains not 16.0 ounces, but 15.9 ounces of liquid. The dispersion could be the same, and this shift would be detected by an x-bar chart but not by a range chart. This is shown in part (a) of the figure below. On the other hand, there could be a shift in the dispersion of the product without a change in the mean. Cocoa Fizz may still be producing bottles with an average fill of 16.0 ounces. However, the dispersion of the product may have increased, as shown in part (b) of the figure below. This condition would be detected by a range chart but not by an x-bar chart. Because a shift in either the mean or the range means that the process is out of control, it is important to use both charts to monitor the process.

EXAMPLE: 7

A quality control inspector at Chileupeung Sdn. Bhd. monitors the production line of a pressing machine by measuring the length of a sim part, with the data as in the table below. Calculate the control limits and graph the process!


The calculation of the data is as follows:

a. The construction of the X-bar and R charts for this process is as follows:


EXAMPLE: 8

Oon Pisan, an inspector at a manufacturer of circuit boards for personal computers, found that various components are to be mounted on each board and the boards are eventually slipped into slots in a chassis. The boards' overall length is crucial to assure a proper fit, and this dimension has been targeted as an important item to be stabilized.

R-bar = 0.569 / 25 = 0.02276
UCL (R) = (2.114) (0.02276) = 0.048115 and LCL (R) = (0) (0.02276) = 0.00000

R Zone Boundaries:

a. Between lower zones A and B = R-bar - 2 d3 (R-bar/d2) = R-bar (1 - 2 d3/d2) = 0.02276 (1 - 2(0.864)/2.326) = 0.005851
b. Between lower zones B and C = R-bar - d3 (R-bar/d2) = R-bar (1 - d3/d2) = 0.02276 (1 - (0.864)/2.326) = 0.014306
c. Between upper zones A and B = R-bar + 2 d3 (R-bar/d2) = R-bar (1 + 2 d3/d2) = 0.02276 (1 + 2(0.864)/2.326) = 0.039669
d. Between upper zones B and C = R-bar + d3 (R-bar/d2) = R-bar (1 + d3/d2) = 0.02276 (1 + (0.864)/2.326) = 0.031214

The Center Line:

X-double-bar = 125.61 / 25 = 5.00244

UCL (X-bar) = 5.00244 + 0.577 (0.02276) = 5.015573
and LCL (X-bar) = 5.00244 - 0.577 (0.02276) = 4.989307

X-bar Zone Boundaries (measured from the center line, 5.00244):

a. Between lower zones A and B = X-double-bar - (2/3) A2 R-bar = 5.00244 - ((2/3) (0.577) (0.02276)) = 4.993685
b. Between lower zones B and C = X-double-bar - (1/3) A2 R-bar = 5.00244 - ((1/3) (0.577) (0.02276)) = 4.998062
c. Between upper zones B and C = X-double-bar + (1/3) A2 R-bar = 5.00244 + ((1/3) (0.577) (0.02276)) = 5.006818
d. Between upper zones A and B = X-double-bar + (2/3) A2 R-bar = 5.00244 + ((2/3) (0.577) (0.02276)) = 5.011195
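These boundary calculations can be verified with a few lines of Python. This is a checking sketch only, using the constants quoted in the example (A2 = 0.577, d2 = 2.326, d3 = 0.864, D4 = 2.114 for n = 5); it is not part of the original text.

r_bar = 0.569 / 25                 # 0.02276
x_double_bar = 5.00244
a2, d2, d3, d4 = 0.577, 2.326, 0.864, 2.114

print(d4 * r_bar)                                               # UCL(R), about 0.0481
print(r_bar * (1 - 2 * d3 / d2), r_bar * (1 + 2 * d3 / d2))     # R chart zone A/B boundaries
print(x_double_bar + a2 * r_bar, x_double_bar - a2 * r_bar)     # UCL and LCL for X-bar
print(x_double_bar + (2 / 3) * a2 * r_bar,
      x_double_bar - (2 / 3) * a2 * r_bar)                      # X-bar zone A/B boundaries
print(x_double_bar + (1 / 3) * a2 * r_bar,
      x_double_bar - (1 / 3) * a2 * r_bar)                      # X-bar zone B/C boundaries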


EXAMPLE: 9

Twelve additional samples of cure-time data from the molding process were collected from an actual production run. The data from these new samples are shown in Table 2. Save the raw data for this table and try to draw the control charts. Compare the results with those given here.

The X-bar and the R charts are drawn with the new data, with the same control limits established before. They are shown below.

Establish the revised central line and control limits. If an analysis of the preliminary data shows good control, then X-double-bar and R-bar can be considered representative of the process, and these become the standard values, X0 and R0. Good control can be briefly described as that which has no out-of-control points, no long runs on either side of the central line, and no unusual patterns of variation.


There are two techniques used to discard data: if either the X-bar or R value of a subgroup is out of control and has an assignable cause, both are discarded; or only the out-of-control value of a subgroup is discarded.

Formula:

X-double-bar (new) = (sum of X-bar - X-bar-d) / (g - g-d)
R-bar (new) = (sum of R - R-d) / (g - g-d)

where
X-bar-d = the discarded subgroup averages,
g-d = the number of discarded subgroups,
R-d = the discarded subgroup ranges, and
g = the total number of subgroups.

EXAMPLE: 10

Calculations for a new X-double-bar are based on discarding the X-bar values of 6.65 and 6.51 for subgroups 4 and 20, respectively. Calculations for a new R-bar are based on discarding the R value of 0.30 for subgroup 18.
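A small sketch of the discard-and-recompute formulas above. The numbers used are hypothetical, since the full data table for Example 10 is not reproduced here; only the shape of the calculation is shown.

def revised_center(sum_of_stats, discarded_stats, num_subgroups):
    """Recompute X-double-bar (new) or R-bar (new) after discarding out-of-control subgroups."""
    return (sum_of_stats - sum(discarded_stats)) / (num_subgroups - len(discarded_stats))

# Hypothetical example: 25 subgroups whose X-bar values sum to 150.0,
# discarding the two out-of-control averages 6.65 and 6.51 (as in Example 10).
print(revised_center(150.0, [6.65, 6.51], 25))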


These new values of X-double-bar and R-bar are used to establish the standard values of X0, R0, and sigma-0. Thus,

where d2 = a factor from the Table for estimating sigma-0 from R0. The standard or reference values can be considered to be the best estimate with the data available. As more data become available, better estimates of, or more confidence in, the existing standard values are obtained.

Using the standard values, the central lines and the 3-sigma control limits for actual operations are obtained using the formulas

where A, D1 and D2 are the factors from the Table for obtaining the 3-sigma control limits from X0 and sigma-0.

EXAMPLE: 11

From Table B, for a subgroup size of 4, the factors are A = 1.500, d2 = 2.059, D1 = 0, and D2 = 4.698. Calculations to determine X0 and sigma-0 using the data previously given are

Thus, the control limits are


EXERCISE: 1

A new machine that is used to fill bottles of shampoo with a specific weight of the product has been in trial production. The table gives the data collected by the operator every 20 minutes:

EXERCISE: 2

The Get-Well Hospital has completed a quality improvement project on the time to admit a patient using X-bar and R Charts. They now wish to monitor the activity using median and range charts. Determine the central line and control limits with the latest data in minutes as given below:

EXERCISE: 3

The manufacturing engineer takes samples from a trial production run of a pressing machine to find the performance of the mold that presses the ceramic powder for a ceramic IC ROM, with the data as shown below. Determine the central line and control limits of the pressing machine performance.


EXERCISE: 4

The data taken from the machine that produces the washers are as below:

Determine the central line and control limits of the machine performance and draw the graph!


Table:


ATTRIBUTE CHARTS

Attribute Charts are a set of control charts specifically designed for Attributes data. Attribute charts monitor the process location and variation over time in a single chart.

The family of Attribute Charts include the:

Np-Chart: for monitoring the number of times a condition occurs, relative to a constant sample size, when each sample can either have this condition, or not have this condition.
p-Chart: for monitoring the percent of samples having the condition, relative to either a fixed or varying sample size, when each sample can either have this condition, or not have this condition.
c-Chart: for monitoring the number of times a condition occurs, relative to a constant sample size, when each sample can have more than one instance of the condition.
u-Chart: for monitoring the percent of samples having the condition, relative to either a fixed or varying sample size, when each sample can have more than one instance of the condition.

When to Use an Attribute Chart

Only Attributes data can be applied to an Attributes control chart. To see the differences between various attribute charts, let's consider an example of the errors in an accounting process, where each month we process a certain number of transactions.

The Np-Chart monitors the number of times a condition occurs, relative to a constant sample size, when each sample can either have this condition, or not have this condition. For our example, we would sample a set number of transactions each month from all the transactions that occurred, and from this sample count the number of transactions that had one or more errors. We would then track on the control chart the number of transactions with errors per month.

The p-Chart monitors the percent of samples having the condition, relative to either a fixed or varying sample size, when each sample can either have this condition, or not have this condition. For our example, we might choose to look at all the transactions in the month (since that would vary from month to month), or a set number of samples, whichever we prefer. From this sample, we would count the number of transactions that had one or more errors. We would then track on the control chart the percent of transactions with errors per month.

The c-Chart monitors the number of times a condition occurs, relative to a constant sample size. In this case, a given sample can have more than one instance of the condition, in which case we count all the times it occurs in the sample. For our example, we would sample a set number of transactions each month from all the transactions that occurred, and from this sample count the total number of errors in all the transactions. We would then track on the control chart the number of errors in all the sampled transactions per month.

The u-Chart monitors the percent of samples having the condition, relative to either a fixed or varying sample size. In this case, a given sample can have more than one instance of the condition, in which case we count all the times it occurs in the sample. For our example, we might choose to look at all the transactions in the month (since that would vary month to month), or a set number of samples, whichever we prefer. From this sample, we count the total number of errors in all the transactions. We would then track on the control chart the number of errors per transaction per month.
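The four charts differ only in the statistic plotted and in how the three-sigma spread is estimated. The sketch below summarizes the standard limit formulas for each; it is illustrative only, uses the usual textbook assumptions, and is not taken from the text's own notation.

from math import sqrt

def np_limits(p_bar, n):
    """np-chart: count of nonconforming units, constant sample size n."""
    center = n * p_bar
    spread = 3 * sqrt(n * p_bar * (1 - p_bar))
    return center, center + spread, max(0.0, center - spread)

def p_limits(p_bar, n):
    """p-chart: fraction nonconforming for a sample of size n."""
    spread = 3 * sqrt(p_bar * (1 - p_bar) / n)
    return p_bar, p_bar + spread, max(0.0, p_bar - spread)

def c_limits(c_bar):
    """c-chart: count of nonconformities per constant-size sample."""
    spread = 3 * sqrt(c_bar)
    return c_bar, c_bar + spread, max(0.0, c_bar - spread)

def u_limits(u_bar, n):
    """u-chart: nonconformities per unit for a sample of size n."""
    spread = 3 * sqrt(u_bar / n)
    return u_bar, u_bar + spread, max(0.0, u_bar - spread)

# Example: 2% of transactions have errors, 200 transactions sampled per month.
print(np_limits(0.02, 200), p_limits(0.02, 200))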

Interpreting an Attribute Chart

Each chart includes statistically determined upper and lower control limits, indicating the bounds of expected process behavior. The fluctuation of the points between the control limits is due to the variation that is intrinsic (built in) to the process. We say that this variation is due to "common causes" that influence the process. Any points outside the control limits can be attributed to a "special cause," implying a shift in the process. When a process is influenced by only common causes, then it is stable, and can be predicted. Thus, a key value of the control chart is to identify the occurrence of special causes, so that they can be removed, with a reduction in overall process variation. Then, the process can be further improved by either relocating the process to an optimal average level, or decreasing the variation due to common causes.


Attribute charts are fairly simple to interpret: merely look for out of control points. If there are any, then the special causes must be eliminated. Brainstorm and conduct Designed Experiments to find those process elements that contribute to sporadic changes in process location. To use the data you have, turn Auto Drop ON, which will remove the statistical bias of the out of control points by dropping them from the calculations of the average and control limits.

Remember that the variation within control limits is due to the inherent variation in sampling from the process. (Think of Deming's Red Bead experiment: the proportion of red beads never changed in the bucket, yet each sample had a varying count of red beads). The bottom line is: React first to special cause variation. Once the process is in statistical control, then work to reduce variation and improve the location of the process through fundamental changes to the system.

P-CHART Calculations

P-charts are used to measure the proportion that is defective in a sample. The computation of the center line as well as the upper and lower control limits is similar to the computation for the other kinds of control charts. The center line is computed as the average proportion defective in the population, p̄. This is obtained by taking a number of samples of observations at random and computing the average value of p across all samples. The p-chart is used when dealing with ratios, proportions or percentages of conforming or nonconforming parts in a given sample. A good example for a p-chart is the inspection of products on a production line. They are either conforming or nonconforming. The probability distribution used in this context is the binomial distribution, with p representing the nonconforming proportion and q (which is equal to 1 − p) representing the proportion of conforming items. Since the products are only inspected once, the experiments are independent from one another.

The first step when creating a p-chart is to calculate the proportion of nonconformity for each sample:

    p = m / b

where m represents the number of nonconforming items, b is the number of items in the sample, and p is the proportion of nonconformity.

Plotted statistic: the percent of items in the sample meeting the criteria of interest,

    p_j = (number of nonconforming items in group j) / n_j

where n_j is the sample size (number of units) of group j.

The mean proportion is

    p̄ = ( p_1 + p_2 + … + p_k ) / k

where p̄ is the mean proportion, k is the number of samples audited, and p_k is the kth proportion obtained.

1. Center Line

    CL = p̄ = Σ n_j p_j / Σ n_j

where n_j is the sample size (number of units) of group j, and m is the number of groups included in the analysis.


2. UCL , LCL (Upper and Lower Control Limit)

Upper Control Limit: UCL_j = p̄ + 3·√( p̄(1 − p̄) / n_j )        Lower Control Limit: LCL_j = p̄ − 3·√( p̄(1 − p̄) / n_j )

where n_j is the sample size (number of units) of group j, and p-bar (p̄) is the average percent, which represents the center line.
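As a rough illustration of these formulas, the following Python sketch (with made-up inspection counts, not data from the course examples) computes the center line and the per-subgroup limits for a p chart:

    import math

    # Hypothetical data: units inspected (n_j) and nonconforming units per subgroup
    inspected = [450, 500, 480, 520, 460]
    nonconforming = [5, 9, 7, 12, 6]

    # Center line: pooled proportion nonconforming = total nonconforming / total inspected
    p_bar = sum(nonconforming) / sum(inspected)

    for j, (nj, d) in enumerate(zip(inspected, nonconforming), start=1):
        p_j = d / nj                                 # plotted statistic for subgroup j
        sigma = math.sqrt(p_bar * (1 - p_bar) / nj)  # standard error for this subgroup size
        ucl = p_bar + 3 * sigma
        lcl = max(p_bar - 3 * sigma, 0.0)            # a negative LCL is set to zero
        print(f"subgroup {j}: p = {p_j:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")

    print(f"center line p-bar = {p_bar:.4f}")

Because the limits use n_j, they widen for small subgroups and narrow for large ones, which is the behavior discussed later in this section under varying control limits.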

EXAMPLE : 1

During the first shift, 450 inspections are made of book-of-the-month shipments and 5 nonconforming units are found. Production during the shift was 15,000 units. What is the fraction nonconforming?

p = 5 / 450 = 0.011

EXAMPLE : 2

Belangbetong Pte. Ltd. has been printing milk cans for a number of days. They use p charts to keep track of the number of nonconforming cans that are created each time a batch of cans is run. The data are shown below:


EXAMPLE : 3

Inspection results for washer part nonconformances, taken from production in June, are as follows:


EXAMPLE : 4

Below are the inspection data taken by QA inspection of the hair dryer blower motor on the production line:

[Figure: p chart of the inspection data, with one subgroup marked as out of control]


From the figure above, subgroup 19 is above the upper control limit. If we discard subgroup 19, the calculation for the new chart is as follows:

    p_new = ( Σnp − np_d ) / ( Σn − n_d )

where np_d = number nonconforming in the discarded subgroups and n_d = number inspected in the discarded subgroups.

Setting p_o = p_new, the revised center line and control limits are then recalculated.
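A short Python sketch of this revision step, again with hypothetical counts rather than the data from the example:

    # Hypothetical: 20 subgroups of 100 units; subgroup 19 (index 18) was out of control
    inspected = [100] * 20
    nonconforming = [3, 2, 4, 1, 3, 2, 5, 2, 3, 4, 2, 1, 3, 2, 4, 3, 2, 1, 12, 2]
    discard = [18]                      # indices of the discarded subgroups

    d_disc = sum(nonconforming[i] for i in discard)
    n_disc = sum(inspected[i] for i in discard)

    # p_new = (sum of np - np_d) / (sum of n - n_d)
    p_new = (sum(nonconforming) - d_disc) / (sum(inspected) - n_disc)
    p0 = p_new                          # revised reference value used for the new limits
    print(f"p0 = {p0:.4f}")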


EXAMPLE : 5

Below are the lightbulb nonconforming inspection data taken from the production line of PT. Terang Gemilang:


USING AVERAGE SUBGROUP SIZE

There are three different ways around the problem of variable subgroup sizes:

1. Compute the average subgroup size and use this value for the control chart.

2. Compute new control limits and zone boundaries for every subgroup, based on that subgroup's size.

3. Compute both a wide and a narrow set of control limits, based upon the smallest and largest possible values for n.

EXAMPLE : 6

Consider the case of a small manufacturer of low-tension electrical insulators. Each day during a one-month period the manufacturer inspects the production of a given shift; the number inspected varies somewhat. Based on carefully laid out operational definitions, some of the production is deemed nonconforming and is downgraded.

Centerline: p̄ = 594/9769 = 0.061

Average value of n: n̄ = 9769/25 = 390.76


Boundary between upper zones A and B: p̄ + 2·√( p̄(1 − p̄) / n̄ )        Boundary between upper zones B and C: p̄ + √( p̄(1 − p̄) / n̄ )

Boundary between lower zones A and B: p̄ − 2·√( p̄(1 − p̄) / n̄ )        Boundary between lower zones B and C: p̄ − √( p̄(1 − p̄) / n̄ )

USING VARYING CONTROL LIMITS

When sample sizes vary by more than 25%, we may either compute two sets of control limits or calculate new zone boundaries and control limits for each subgroup. Although the calculations required for new zone boundaries for each subgroup are more tedious, the technique is more sensitive.

EXAMPLE : 7

Centerline: p̄ = 2569/6421 = 0.400093


The process indicates many instances of a lack of control. Fully 25% of the subgroup proportions are out of control, and the data seem to be behaving in an extremely erratic pattern. Days 7, 9, 13, 18, and 19 are all beyond the control limits. Day 5 also indicates a lack of control because it is the second of three consecutive points falling in zone A or beyond on the same side of the centerline.

Boundary between upper zones A and B: p̄ + 2·√( p̄(1 − p̄) / n_j )        Boundary between upper zones B and C: p̄ + √( p̄(1 − p̄) / n_j )


Boundary between lower zones A and B: p̄ − 2·√( p̄(1 − p̄) / n_j )        Boundary between lower zones B and C: p̄ − √( p̄(1 − p̄) / n_j )

EXAMPLE : 8

Preliminary data of computer modem final test and control limits for each group


Since all these out-of-control points have assignable causes, they are discarded. A new p is obtained

as follows:

Since this value represents the best estimate of the standard or reference value of the fraction nonconforming, p_o = 0.019.

The fraction nonconforming, p_o, is used to calculate the upper and lower control limits for the next period, which is subgroup 26 and so on. However, the limits cannot be calculated until the end of each period, when the subgroup size, n, is known. This means that the control limits are never known ahead of time.

p_26 = np / n_26 = 31 / 1535 = 0.020

UCL_26 = p_o + 3·√( p_o(1 − p_o) / n_26 ) = 0.019 + 3·√( 0.019(1 − 0.019) / 1535 ) = 0.029

LCL_26 = p_o − 3·√( p_o(1 − p_o) / n_26 ) = 0.019 − 3·√( 0.019(1 − 0.019) / 1535 ) = 0.009


NOTE:

p is the proportion (fraction) nonconforming in a single subgroup. It is posted to the chart but is not used to calculate the control limits.

p̄ is the average proportion (fraction) nonconforming of the subgroups. It is the sum of the number nonconforming divided by the sum of the number inspected, and it is used to calculate the trial control limits.

p_o is the standard or reference value of the proportion (fraction) nonconforming, based on the best estimate of p̄. It is used to calculate the revised control limits. It can be specified as a desired value.

The population proportion (fraction) nonconforming, when known, can also be used to calculate the limits, since p_o is then set equal to it.

EXERCISE : 1

Number of defectives in 30 SubGroups of size 50 as below:

a. Calculate the string of successive proportions of defective.

b. Calculate the centerline and control limits for the p chart.

c. Draw the p chart.

d. Is the process stable? How do you know?

EXERCISE : 2

Steel pails are manufactured at a high rate. Periodic samples of 50 pails are selected from the process. Results of that sampling are:

a. Calculate the string of successive proportions of defective pails.

b. Calculate the centerline and control limits for the p chart.

c. Draw the p chart.

d. Is the process stable? How do you know?


EXERCISE : 3

Determine the trial central line and control limits of a p chart using the following data, which are for the payment of dental insurance claims. Plot the values on graph paper and determine if the process is stable. If there are any out-of-control points, assume an assignable cause and determine the revised central line and control limits.

EXERCISE : 4

A company has designed a production line for a new model of computer speakers. The design of the speakers is such that if they do not operate correctly after being built they are scrapped, as opposed to reworked, due to the extremely high cost of diagnosis and rework. Management is interested in getting an idea of the scrap associated with the process. The quality department decides to take a sample of fifteen sets of speakers per hour for a 24-hour period to determine the percent of scrap produced by the line. The data are shown in the table below:

a. Construct the p chart.

b. Interpret the chart, referring to the patterns used for attribute control charts.


Np-CHART Calculations

The np chart is one of the easiest to build. While the p-chart tracks the proportion of nonconformities per sample, the np chart plots the number of nonconforming items per sample.

The audit process of the samples follows a binomial distribution; in other words, the expected outcome is "good" or "bad", and therefore the mean number of successes is np. The control limits for an np chart are as follows:

1. Center Line

    CL = n·p̄ = (total number nonconforming) / m

where m is the number of groups included in the analysis.

2. UCL, LCL (Upper and Lower Control Limits)

    UCL = n·p̄ + 3·√( n·p̄(1 − p̄) )        LCL = n·p̄ − 3·√( n·p̄(1 − p̄) )

where n is the sample size, np̄ is the average count, and p̄ = np̄ / n.
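A minimal Python sketch of these np-chart calculations, using made-up counts and a constant subgroup size (both are illustrative assumptions, not values from the examples below):

    import math

    n = 50                                      # constant subgroup size (hypothetical)
    counts = [2, 1, 3, 0, 2, 4, 1, 2, 3, 1]     # nonconforming items per subgroup (hypothetical)

    np_bar = sum(counts) / len(counts)          # center line: average count per subgroup
    p_bar = np_bar / n

    sigma = math.sqrt(n * p_bar * (1 - p_bar))
    ucl = np_bar + 3 * sigma
    lcl = max(np_bar - 3 * sigma, 0.0)          # a negative LCL is set to zero
    print(f"CL = {np_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")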

EXAMPLE : 1

Bricks are manufactured for use in housing construction. The bricks are produced and then put on pallets for shipment. Samples of fifty are taken each hour and visually inspected for cracks, breakage, and other flaws that would result in construction problems. The data for one day of three-shift operation are shown below:

np̄ = (50)(35/1200) = 1.458

The upper and lower control limits are computed as follows:


EXAMPLE : 2

Pcc INC. receives shipments of circuit boards from its suppliers by the truckload. They keep track of the number of damaged, incomplete, or inoperative circuit boards found when the truck is unloaded. This information helps them make decisions about which suppliers to use in the future. The data are shown in the table below:

Centerline: np̄ = 73/20 = 3.65

The control limits are found using the following formulas:

p̄ = 73/1000 = 0.073

UCL_np = n·p̄ + 3·√( n·p̄(1 − p̄) ) = 3.65 + 3·√( 3.65(1 − 0.073) ) = 9.17

LCL_np = n·p̄ − 3·√( n·p̄(1 − p̄) ) = 3.65 − 3·√( 3.65(1 − 0.073) ) = −1.87 → 0


EXAMPLE : 3

The centerline is the overall average number of nonconforming (or conforming) items found in each subgroup of the data. For the ceramic tile importer, there are a total of 183 cracked or broken tiles in the 30 subgroups examined.

a. Centerline: np̄ = (100)(183/3000) = 6.100, or np̄ = 183/30 = 6.100, with p̄ = 0.061

b. UCL_np = n·p̄ + 3·√( n·p̄(1 − p̄) ) = (100)(0.061) + 3·√( (100)(0.061)(1 − 0.061) ) = 13.28

   or UCL_np = 6.1 + 3·√( 6.1(1 − 0.061) ) = 13.28

c. LCL_np = n·p̄ − 3·√( n·p̄(1 − p̄) ) = (100)(0.061) − 3·√( (100)(0.061)(1 − 0.061) ) = −1.080 → 0

   or LCL_np = 6.1 − 3·√( 6.1(1 − 0.061) ) = −1.080 → 0

U CHART Calculations

One of the premises for a c-chart is that the sample sizes have to be the same. The sample sizes can vary when the u-chart is used to monitor the quality of the production process, and the u-chart does not require any limit to the number of potential defects. Furthermore, for a p-chart or an np-chart the number of nonconformances cannot exceed the number of items in a sample, but for a u-chart it can, since what is being counted is not the number of defective items but the number of defects in the sample.

The first step in creating a u -chart is to calculate the number of defects per unit for each sample.


    u = c / n

where u represents the average defects per sample, c is the total number of defects, and n is the sample size.

Once all the averages are determined, a distribution of the means is created and the next step is to find the mean of the distribution, in other words, the grand mean:

    ū = ( u_1 + u_2 + … + u_k ) / k

where k is the number of samples.

The control limits are determined based on ū and the sample sizes n. The plotted statistic is the average count of occurrences of the criterion of interest per unit in a sample of items,

    u_j = c_j / n_j

where n_j is the sample size (number of units) of group j.

1. Center Line

    CL = ū = Σ c_j / Σ n_j

where n_j is the sample size (number of units) of group j, and m is the number of groups included in the analysis.

2. UCL, LCL (Upper and Lower Control Limits)

    UCL_j = ū + 3·√( ū / n_j )        LCL_j = ū − 3·√( ū / n_j )

where n_j is the sample size (number of units) of group j and u-bar (ū) is the average number of nonconformities per unit.
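The following Python sketch mirrors these u-chart formulas; the inspection-unit counts and defect counts are hypothetical:

    import math

    # Hypothetical data: inspection units per sample (n_j) and defects found (c_j)
    units = [10, 12, 9, 11, 10]
    defects = [25, 31, 20, 28, 24]

    u_bar = sum(defects) / sum(units)           # center line: pooled defects per unit

    for j, (nj, cj) in enumerate(zip(units, defects), start=1):
        u_j = cj / nj                           # plotted statistic for sample j
        sigma = math.sqrt(u_bar / nj)
        ucl = u_bar + 3 * sigma
        lcl = max(u_bar - 3 * sigma, 0.0)       # a negative LCL is set to zero
        print(f"sample {j}: u = {u_j:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")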

EXAMPLE : 1

A certain grade of plastic is manufactured in rolls, with samples taken 5 times daily. Because of the nature of the process, the square footage of each sample varies from inspection lot to inspection lot.

Centerline (u): ū = (total number of defects) / (total number of 100-sq-ft inspection units) = 2.51

Upper control limit: UCL(u) = ū + 3·√( ū / n_i ) = 2.51 + 3·√( 2.51 / n_i )

Lower control limit: LCL(u) = ū − 3·√( ū / n_i ) = 2.51 − 3·√( 2.51 / n_i )


[Figure: u chart for the 30 inspection units, showing the plotted values with the upper and lower control limits]


EXAMPLE : 2

The number of nonconformities in carpets is determined for 20 samples (areas measured in mm²), but the amount of carpet inspected for each sample varies. Results of the inspection are shown in the table below:

Centerline (u): ū = (total number of nonconformities) / (total number of inspection units) = 192/41 = 4.683

Upper control limit: UCL(u) = ū + 3·√( ū / n_i ) = 4.683 + 3·√( 4.683 / n_i )

Lower control limit: LCL(u) = ū − 3·√( ū / n_i ) = 4.683 − 3·√( 4.683 / n_i )


EXERCISE :1

The number of typographical errors is counted over a certain number of pages for each sample. The data for 25 samples are shown below. The number of pages used for each sample is not fixed. Construct a control chart for the number of typographical errors per page. Revise the limits, assuming special causes for the points that are out of control.

EXERCISE :2

The number of imperfections in bond paper produced by a paper mill is observed over a period of several days. The table below shows the area inspected and the number of imperfections for 25 samples.

[Figure: u chart of nonconformities per unit, showing the centerline with the upper and lower control limits]


1. Construct a control chart for the number of imperfections per square meter. Revise the limits if necessary, assuming special causes for the out-of-control points.

2. If we want to control the number of imperfections per 100 m² instead, how would this affect the control chart? What would the control limits be? In terms of decision making, would there be a difference between this and part 1?

C CHART Calculations

The c -chart monitors the process variations due to the fluctuations of defects per item or group of items. The c -chart is useful for the process engineer to know not just how many items are not conforming but how many defects there are per item. Knowing how many defects there are on a given part produced on a line might in some cases be as important as knowing how many parts are defective. Here, non-conformance must be distinguished from defective items since there can be several non-conformances on a single defective item.

The probability for a nonconformance to be found on an item, in this case follows a Poisson distribution. If the sample size does not change and the defects on the items are fairly easy to count, the c -chart becomes an effective tool to monitor the quality of the production process.

If c̄ is the average number of nonconformities, the UCL and LCL will be given as follows for a k-sigma control chart. The plotted statistic is the count of occurrences of the criterion of interest in a sample of items.

1. Center Line

    CL = c̄ = (total count of nonconformities) / m

where m is the number of groups included in the analysis.


2. UCL, LCL (Upper and Lower Control Limits)

    UCL = c̄ + 3·√c̄        LCL = c̄ − 3·√c̄

where n is the sample size and c-bar (c̄) is the average count.
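A short Python sketch of the c-chart calculation, using hypothetical counts of nonconformities per sample:

    import math

    counts = [7, 9, 5, 8, 12, 6, 7, 10, 4, 8]    # nonconformities per sample (hypothetical)

    c_bar = sum(counts) / len(counts)            # center line
    ucl = c_bar + 3 * math.sqrt(c_bar)
    lcl = max(c_bar - 3 * math.sqrt(c_bar), 0.0) # a negative LCL is set to zero
    print(f"CL = {c_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")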

EXAMPLE : 1

Samples of fabric from a textile mill, each 100 m², are selected, and the number of occurrences of foreign matter is recorded. Data for 25 samples are shown in the table below. Construct a c-chart for the number of nonconformities.

Centerline: c̄ = 189/25 = 7.560

Upper control limit: UCL(c) = c̄ + 3·√c̄ = 7.560 + 3·√7.560 = 15.809

Lower control limit: LCL(c) = c̄ − 3·√c̄ = 7.560 − 3·√7.560 = −0.689 → 0

EXAMPLE : 2

The number of paint blemishes on automobile bodies is observed for 30 samples. Each sample consists of randomly selecting 5 automobiles of a certain make and style. Construct a chart for the number of paint blemishes. Assuming special causes for the out-of-control points, revise these limits accordingly.

Centerline: c̄ = 182/30 = 6.07

Upper control limit: UCL(c) = c̄ + 3·√c̄ = 6.07 + 3·√6.07 = 13.46

Lower control limit: LCL(c) = c̄ − 3·√c̄ = 6.07 − 3·√6.07 = −1.32 → 0


Centerline: c̄ = 153/28 = 5.46 (samples 17 and 20 struck out)

Upper control limit: UCL(c) = c̄ + 3·√c̄ = 5.46 + 3·√5.46 = 12.48

Lower control limit: LCL(c) = c̄ − 3·√c̄ = 5.46 − 3·√5.46 = −1.55 → 0

EXERCISE : 1

The number of scratch marks for a particular piece of furniture is recorded for samples of size 10. The results are shown in the table below.

1. Construct a chart for the number of scratch marks. Revise the control limits, assuming special causes for the out-of-control points.

2. Suppose that management sets a goal of 4 scratch marks on average per 10 pieces. Set up an appropriate control chart and determine whether the process is capable of meeting this standard.


Summary:

                          Binomial distribution              Poisson distribution
                          (proportion nonconforming)         (area of opportunity)

Sample size VARIES        p chart: nonconforming items       u chart: number of nonconformities per unit

Sample size CONSTANT      np chart: nonconforming items      c chart: number of nonconformities in the sample


The NORMAL DISTRIBUTION

The normal distribution is one of the many probability distributions that a continuous random variable can possess. The normal distribution is the most important and most widely used of all the probability distributions. A large number of phenomena in the real world are normally distributed either exactly or approximately. The continuous random variables representing the heights and weights of people, scores on an examination, weights of packages (e.g., cereal boxes, boxes of cookies), amount of milk in a gallon, life of an item (such as a light bulb, or a television set), and the time taken to complete a certain job have all been observed to have a (approximate) normal distribution.

The normal probability distribution or the normal curve is given by a bell-shaped (symmetric) curve. Such a curve is shown in the figures below. It has a mean of µ and a standard deviation of σ. A continuous random variable X that has a normal distribution is called a normal random variable. Note that not all bell-shaped curves represent a normal distribution curve. Only a specific kind of bell-shaped curve represents a normal curve.

Normal Probability Distribution

A normal distribution possesses the following three characteristics.

1. The total area under a normal distribution curve is 1.0 or 100%, as shown in Figure 1

Figure 1 Total area under a normal curve. The shaded area is 1.0 or 100%

2. A normal distribution curve is symmetric about the mean, as shown in Figure 2.

Consequently, 1/2 of the total area under a normal distribution curve lies on the left side of the mean and 1/2 lies on the right side of the mean.

Figure 2 The curve is symmetric about the mean.

3. The tails of a normal distribution curve extend indefinitely in both directions without touching or crossing the horizontal axis. Although a normal distribution curve never meets the horizontal axis, beyond the points represented by µ − 3σ and µ + 3σ it becomes so close to this


axis that the area under the curve beyond these points in both directions can be taken as virtually zero. These areas are shown in Figure 3.

Figure 3 The two tails of the curve extend indefinitely.

The mean, µ, and the standard deviation, σ, are the parameters of the normal distribution. Given the values of these two parameters, we can find the area under a normal distribution curve for any interval. Remember, there is not just one normal distribution curve but rather a family of normal distribution curves. Each different set of values of µ and σ gives a different normal distribution. The value of µ determines the center of a normal distribution on the horizontal axis and the value of σ gives the spread of the normal distribution curve. The two normal distribution curves drawn in Figure 4 have the same mean but different standard deviations. By contrast, the two normal distribution curves in Figure 5 have different means but the same standard deviation.

Figure 4 Two normal distribution curves with the same mean but different standard deviations.

Figure 5 Two normal distribution curves with the same standard deviation but different means.


The Standard Normal Distribution

The standard normal distribution is a special case of the normal distribution. For the standard normal distribution, the value of the mean is equal to zero, and the value of the standard deviation is equal to 1.

Figure 6 displays the standard normal distribution curve. The random variable that possesses the standard normal distribution is denoted by z. In other words, the units for the standard normal distribution curve are denoted by z and are called the z values or z scores. They are also called standard units or standard scores.

There are four areas on a standard normal curve that all introductory statistics students should know. The first is that the total area below 0.0 is .50, as the standard normal curve is symmetrical like all normal curves. This result generalizes to all normal curves in that the total area below the value of µ is .50 on any member of the family of normal curves. See Figure 6.

Figure 6 The standard normal distribution curve z = 0 and below.

The second area that should be memorized is between z scores of -1.00 and +1.00. It is .68 or 68%. See Figure 7.

Figure 7 The standard normal distribution curve z = + 1.

The third area is between z scores of -2.00 and +2.00 and is .95 or 95%. See figure 8.

Figure 8 The standard normal distribution curve z = + 2


The fourth area is between z scores of −3.00 and +3.00 and is .997 or 99.7%. See Figure 9 (the figure's label of 99.97% for µ ± 3σ is a typo; the correct value is 99.7%).

Figure 9 The standard normal distribution curve z = + 3

z Values or z Scores

The units marked on the horizontal axis of the standard normal curve are denoted by z and are called the z values or z scores. A specific value of z gives the distance between the mean and the point represented by z in terms of the standard deviation.

In Figure 8, the horizontal axis is labeled z. The z values on the right side of the mean are positive and those on the left side are negative. The z value for a point on the horizontal axis gives the distance between the mean and that point in terms of the standard deviation. For example, a point with a value of z = 2 is two standard deviations to the right of the mean. Similarly, a point with a value of z = -2 is two standard deviations to the left of the mean.

The standard normal distribution table, Area Under the Curve, lists the areas under the standard normal curve between z = 0 and the values of z from 0.00 to 3.50. To read the standard normal distribution table, we always start at z = 0, which represents the mean of the standard normal distribution. We learned earlier that the total area under a normal distribution curve is 1.0. We also learned that, because of symmetry, the area on either side of the mean is .5. This is also shown in Figure 6.

Remember: Although the values of z on the left side of the mean are negative, the area under the curve is always positive.

The area under the standard normal curve between any two points can be interpreted as the probability that z assumes a value within that interval.

Example 1: Find the area under the standard normal curve between z = 0 and z = 1.95.

Solution: To find the required area under the standard normal curve, we locate 1.95 in the standard normal distribution table, Area Under the Curve. The entry gives the area under the standard normal curve for z = 1.95 as 0.9744. Next, we find the area under the standard normal curve for z = 0 as 0.50. We knew this without referring to the table since, by definition, 50% of the area under the curve lies on either side of the mean. Consequently, the area under the standard normal curve between z = 0 and z = 1.95 is 0.9744 − 0.50 = .4744. This area is shown in Figure 10. (It is always helpful to sketch the curve and mark the area we are determining.)


Figure 10 Area between z = 0 and z = 1.95.

Example 2 Find the area from z = -1.56 to z = 2.31 under the standardized curve

Solution: First find the area below the curve for a z value of 2.31, which from the table, Area Under the Curve is 0.9896. Next, find the area below –1.56, which is 0.0594. Therefore, the area from z = -1.56 to z = 2.31 is 0.9896 - 0.0594 = 0.9302. The area can also be found by finding the distance from the mean of 0 for both z values. z = -1.56 is 0.4406 from 0, and z = 2.31 is 0.4896 from 0. See Figure 11. Either method produces the same result, so the method you choose is up to you and the table you have to work from.

Figure 11 Area between z = -1.56 and z = 2.31
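Both of these examples can be checked numerically. A small Python sketch using the standard library error function (no statistical table required):

    from math import erf, sqrt

    def phi(z):
        # Cumulative area under the standard normal curve to the left of z
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    # Example 1: area between z = 0 and z = 1.95
    print(round(phi(1.95) - phi(0.0), 4))    # about 0.4744

    # Example 2: area between z = -1.56 and z = 2.31
    print(round(phi(2.31) - phi(-1.56), 4))  # about 0.9302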

1. Standardizing a Normal Distribution

As was shown in the previous section, the table, Area Under the Curve, can be used to find areas under the standard normal curve. However, in real-world applications, a (continuous) random variable may have a normal distribution with values of the mean and standard deviation different from 0 and 1, respectively. The first step, in such a case, is to convert the given normal distribution to the standard normal distribution. This procedure is called standardizing a normal distribution. The units of a normal distribution (which is not the standard normal distribution) are denoted by X. We know from the last section that units of the standard normal distribution are denoted by z.

For a normal random variable X, a particular value of X can be converted to a z value by using the formula:

    z = ( X − µ ) / σ


Thus, to find the z value for an X value, we calculate the difference between the given value and the mean µ and divide this difference by the standard deviation σ. If the value X is equal to µ, then its z value is equal to zero. The z value for the mean of a normal distribution is always zero. Note that we will always round z values to two decimal places.

Example 3 Let X be a continuous random variable that has a normal distribution with a mean of 50 and a standard deviation of 10. Convert X = 55 to a z value.

Solution: For the given normal distribution, µ = 50 and σ = 10. The z value for X = 55 is computed as follows:

    z = ( X − µ ) / σ = ( 55 − 50 ) / 10 = .50

Thus, the z value for X = 55 is .50. The z values for µ = 50 and X = 55 are shown in Figure 12. Note that the z value for µ = 50 is zero. The value z = .50 for X = 55 indicates that the distance between the mean µ = 50 of the given normal distribution and the point given by X = 55 is 1/2 of the standard deviation σ = 10. Consequently, we can state that the z value represents the distance between µ and X in terms of the standard deviation. Because X = 55 is greater than µ = 50, its z value is positive.

Figure 12 z value for X = 55

Example 4 Let X be a continuous random variable that has a normal distribution with a mean of 50 and a standard deviation of 10. Convert X = 35 to a z value.

Solution: For the given normal distribution, µ = 50 and σ = 10. The z value for X = 35 is computed as follows:

    z = ( X − µ ) / σ = ( 35 − 50 ) / 10 = −1.50


Figure 13 z value for X = 35

Because X = 35 is on the left side of the mean (i.e., 35 is less than µ = 50), its z value is negative. As a general rule, whenever an X value is less than the value of µ, its z value is negative.

To find the area between two values of X for a normal distribution, we first convert both values of X to their respective z values. Then we find the area under the standard normal curve between those two z values. The area between the two z values gives the area between the corresponding X values.

Example 5 , Let X be a continuous random variable that is normally distributed with a mean of 25 and a standard deviation of 4. Find the area between X = 25 and X = 32.

Solution: For the given normal distribution: µ = 25 and s = 4. The first step in finding the required area is to standardize the given normal distribution by converting X = 25 and X = 32 to respective z values using the formula

The z value for X = 25 is zero because it is the mean of the normal distribution. The z value for X = 32 is

    z = ( 32 − 25 ) / 4 = 1.75

As shown in Figure 14, the area between X = 25 and X = 32 under the given normal distribution curve is equivalent to the area between z = 0 and z = 1.75 under the standard normal curve. This area from Area Under the Curve is .4599.

Figure 14

Area between X = 25 and area X = 32


2. T Scores and SAT Scores

Standard scores have one disadvantage: they are difficult to explain to someone who is not well versed in statistics. Since one task of behavioral scientists is to report test scores to people who are not statistically sophisticated, several alternatives to z scores have been developed. The mean and standard deviation of each such "common denominator" has been chosen so that all of the scores will be positive, and so that the mean and standard deviation will be easy to remember.

One such alternative, called T scores, is defined as a set of scores with a mean of 50 and a standard deviation of 10. The T scores are obtained from the following formula:

    T = 10z + 50

Each raw score is converted to a z score; each z score is multiplied by 10, and 50 is added to each resulting score. For example, using the data in Example 5, the raw score of 32 is converted to a z score of +1.75 by the usual formula. Then, T is equal to (10)(+1.75) + 50, or 67.50.

Since the mean of T scores is 50, you can still tell at a glance whether a score is above average (it will be greater than 50) or below average (it will be less than 50). Also, you can tell how many standard deviations above or below average a score is. For example, a score of 40 is exactly one standard deviation below average (equivalent to a z score of - 1.00) since the standard deviation of T scores is 10. A negative T score is mathematically possible but virtually never occurs; it would require that a person be over five standard deviations below average, and scores more than three standard deviations above or below the mean almost never occur with real data.

Scores on some nationally administered examinations, such as the Scholastic Aptitude Test (SAT), the College Entrance Examination Boards, and the Graduate Record Examination, are transformed to a scale with a mean of 500 and a standard deviation of 100. These scores, which we will call SAT scores for want of a better term, are obtained as follows:

SAT = 100z + 500

Figure 15 Relationship among area under the normal curve, standard deviation, percentiles, z, T, and SAT scores.


The raw scores are first converted to z scores; each z score is multiplied by 100, and 500 is added to each resulting score. The proof that this formula does yield a mean of 500 and a standard deviation of 100 is similar to that involving T scores. (In fact, an SAT score is just ten times a T score.) This explains the apparent mystery of how you can obtain a score of 642 on a test with only several hundred items. And you may well be pleased if you obtain a score of 642, since it is 142 points or 1.42 standard deviations above the mean (and therefore corresponds to a z score of + 1.42 and a T score of 64.2).
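These conversions are simple enough to sketch in a few lines of Python (the raw score, mean, and standard deviation below are taken from Example 5; the function names are illustrative):

    def z_score(x, mu, sigma):
        return (x - mu) / sigma

    def t_score(z):
        return 10 * z + 50      # T scores: mean 50, standard deviation 10

    def sat_score(z):
        return 100 * z + 500    # SAT-type scores: mean 500, standard deviation 100

    z = z_score(32, mu=25, sigma=4)
    print(z, t_score(z), sat_score(z))   # 1.75, 67.5, 675.0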


PROCESS CAPABILITY

Minding Your Cpk's

The process potential index, or Cp, measures a process's potential capability, which is defined as the allowable spread over the actual spread. The allowable spread is the difference between the upper specification limit and the lower specification limit. The actual spread is determined from the process data collected and is calculated by multiplying six times the standard deviation, s. The standard deviation quantifies a process's variability. As the standard deviation increases in a process, the Cp decreases in value. As the standard deviation decreases (i.e., as the process becomes less variable), the Cp increases in value.

By convention, when a process has a Cp value less than 1.0, it is considered potentially incapable of meeting specification requirements. Conversely, when a process Cp is greater than or equal to 1.0, the process has the potential of being capable.

Ideally, the Cp should be as high as possible. The higher the Cp, the lower the variability with respect to the specification limits. In a process qualified as a Six Sigma process (i.e., one that allows plus or minus six standard deviations within the specifications limits), the Cp is greater than or equal to 2.0.

However, a high Cp value doesn't guarantee a production process falls within specification limits because the Cp value doesn't imply that the actual spread coincides with the allowable spread (i.e., the specification limits). This is why the Cp is called the process potential.

The process capability index, or Cpk, measures a process's ability to create product within specification limits. Cpk represents the difference between the actual process average and the closest specification limit over the standard deviation, times three.

By convention, when the Cpk is less than one, the process is referred to as incapable. When the Cpk is greater than or equal to one, the process is considered capable of producing a product within specification limits. In a Six Sigma process, the Cpk equals 2.0.

The Cpk is inversely proportional to the standard deviation, or variability, of a process. The higher the Cpk, the narrower the process distribution as compared with the specification limits, and the more uniform the product. As the standard deviation increases, the Cpk index decreases. At the same time, the potential to create product outside the specification limits increases.

Cpk can only have positive values. It will equal zero when the actual process average matches or falls outside one of the specification limits. The Cpk index can never be greater than the Cp, only equal to it. This happens when the actual process average falls in the middle of the specification limits.

Cpk vs. Ppk

Q: What is the difference between the Ppk values reported by Statit and the Cpk values? Why are they both reported? Which one is correct?

A: For Pp and Ppk calculations, the standard deviation used in the denominator is based on all of the data evaluated as one sample, without regard to any subgrouping. This is sometimes referred to as the overall, or total, standard deviation.

For Cp and Cpk calculations, the standard deviation is based on subgroups of the data, using subgroup ranges, standard deviations or moving ranges. This "within-subgroup" process variation


can be considerably smaller than the overall standard deviation estimate, especially when there are long-term trends in the data.

When there are slow fluctuations or trends in the data, the estimate of the process variability based on the subgroups can be smaller than the estimate using all of the process data as one sample. This often occurs when the differences among observations within the subgroup are small, but the range of the entire dataset is significantly larger. Since the within-subgroup variation measures tend to ignore the range of the entire group, they can underestimate the overall process variation.

All of the observations and their variability as a group are what is important when characterizing the capability of a process to stay within the specification limits over time. Underestimating the variability will increase the process capability estimate represented by Cp or Cpk. However, these estimates may not be truly representative of the process.

The following box plot shows data where the within group variability is small, but there are both upward and downward trends in the data. There are a significant number of observations beyond the specification limits.

When the Process Capability procedure in Statit is performed based on this data, there are significant differences between the estimates of Pp and Cp (and, analogously, Ppk and Cpk).


For example, the calculated Cpk, which uses the within-subgroup estimate of the process variability is 1.077. This would typically be considered to represent a marginally capable process – one with only about 0.12% of the output beyond the specifications (12 out of 1000 parts). However, the calculated Ppk value, which uses the variability estimate of the total sample, is only 0.672. This would indicate a process that is not capable and probably produces a high percentage of output beyond the specifications. Note that the actual amount of production beyond the specifications is 5% or roughly 1 out of every 20 parts.

Which of these values are correct? Both are calculated correctly according to their equations, but here the Ppk value is probably the most representative of the ability of the process to produce parts within the specifications.

Note: One way to determine that the variability estimate is not truly representative of the process is to compare the Estimated and Actual values for the Product beyond Specifications in the Statistical output. If the estimated percentage of samples beyond specification is significantly different than the actual percentage reported, then more investigation and analysis of the data would be warranted to achieve the best Process Capability estimates possible based on the data.


Preamble :

This article is devoted to the topic of process capability, with the objective of making people aware of this subject and its significance to business success. The author believes that personal awareness is a prerequisite to personal action, and personal action is what we need for success.

• It can be a source material for you to use in discussing this topic with your organization.
• It will address issues like what is process capability, how to measure it, and how to calculate the process capability indices (Cp, Cpk).
• It will also attempt to explain the differences between process capability and process performance; the relationship between Cpk and non-conforming (defect) rate; and illustrate the four outcomes of comparing natural process variability with customer specifications.
• Lastly, a commentary is provided on precautions we should take while conducting process capability studies.

What is Process Capability ?

1. Process capability is the long-term performance level of the process after it has been brought under statistical control. In other words, process capability is the range over which the natural variation of the process occurs as determined by the system of common causes.

2. Process capability is also the ability of the combination of people, machine, methods, material, and measurements to produce a product that will consistently meet the design requirements or customer expectation.

What is a Process Capability Study ?

A process capability study is a scientific and systematic procedure that uses control charts to detect and eliminate the unnatural causes of variation until a state of statistical control is reached. When the study is completed, you will identify the natural variability of the process.

Why Should I know the Capability of My Processes ?

• Process capability measurements allow us to summarize process capability in terms of meaningful percentages and metrics.
• To predict the extent to which the process will be able to hold tolerance or customer requirements. Based on the law of probability, you can compute how often the process will meet the specification or the expectation of your customer.
• You may learn that bringing your process under statistical control requires fundamental changes - even redesigning and implementing a new process that eliminates the sources of variability now at work.
• It helps you choose from among competing processes the most appropriate one for meeting customers' expectations.
• Knowing the capability of your processes, you can specify better the quality performance requirements for new machines, parts and processes.


Why Should I know the Capability of My Supplier's Processes ?

1. To set realistic, cost-effective part specifications based upon the customer's needs and the costs associated by the supplier at meeting those needs.

2. To understand hidden supplier costs. Suppliers may not know or may hide their natural capability limits in an effort to keep business. This could mean that unnecessary costs could occur, such as sorting to actually meet customer needs.

3. To be pro-active. For example, a Cpk estimation made using injection molding pressure measurements during a molding cycle may help reveal a faulty piston pressure valve ready to malfunction before the actual molded part measurements go out of specifications, thus saving time and money.

Measures of Process Capability - Process Capability Indices:

Cp, Cpl, Cpu, and Cpk are the four most common and time-tested measures of process capability.

• Process capability indices measure the degree to which your process produces output that meets the customer's specification.
• Process capability indices can be used effectively to summarize process capability information in a convenient unitless system.
• Cp and Cpk are quantitative expressions that personify the variability of your process (its natural limits) relative to its specification limits (customer requirements).

Following are the graphical details and equations quantifying process capability:

Where:

USL = Upper Specification Limit
LSL = Lower Specification Limit
X-Bar = Mean of the Process
s = Standard Deviation of the Process

INDEX   ESTIMATED EQUATION                              USAGE
Cp      (USL - LSL) / 6s                                Process capability for a two-sided specification limit, irrespective of process center.
Cpu     (USL - X-Bar) / 3s                              Process capability relative to the upper specification limit.
Cpl     (X-Bar - LSL) / 3s                              Process capability relative to the lower specification limit.
Cpk     Min(Cpu, Cpl), or the distance between the      Process capability for a two-sided specification limit, accounting for process centering.
        process mean and the closest spec limit,
        divided by one-half of the process variability
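As a quick illustration of these equations, here is a minimal Python sketch; the specification limits, mean, and standard deviation are hypothetical values, not data from the article:

    def cp(usl, lsl, s):
        return (usl - lsl) / (6 * s)

    def cpk(usl, lsl, xbar, s):
        cpu = (usl - xbar) / (3 * s)
        cpl = (xbar - lsl) / (3 * s)
        return min(cpu, cpl)

    # Hypothetical process: specification 10.0 +/- 0.3, mean 10.1, standard deviation 0.08
    print(cp(10.3, 9.7, 0.08))         # 1.25
    print(cpk(10.3, 9.7, 10.1, 0.08))  # about 0.83 -- off-center, so Cpk < Cp

Note how the off-center mean drags Cpk below Cp, which is the point made in the notes that follow.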


Notes :

1. If X-Bar is at target, then Cp = Cpk.

2. Cpk will always be equal to or less than Cp.

The Cpk, Ppk Quandary :

In 1991, the ASQ / AIAG task force published the "Statistical Process Control" reference manual, which presented the calculations for capability indices (Cp, Cpk) as well as process performance indices (Pp, Ppk).

The difference between the two indices is the way the process standard deviation (s) is calculated.

Cpk uses s, which is estimated using ( R-Bar / d2 ) or ( S-Bar / C2 ).

Ppk uses the calculated standard deviation from individual data, where s is calculated by the formula:

    s = √( Σ( x_i − x̄ )² / ( n − 1 ) )
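The contrast between the two estimates can be sketched in Python. The subgroup data below are hypothetical, and the within-subgroup sigma is estimated here with the common R-bar/d2 approach (d2 = 2.326 for subgroups of 5); this is an illustrative sketch, not the manual's worked example:

    import statistics

    # Hypothetical measurements in subgroups of 5, with drift between subgroups
    subgroups = [
        [10.1, 10.2,  9.9, 10.0, 10.1],
        [10.4, 10.5, 10.3, 10.4, 10.6],
        [ 9.7,  9.8,  9.6,  9.7,  9.9],
        [10.2, 10.1, 10.3, 10.2, 10.0],
    ]
    usl, lsl = 11.0, 9.0

    d2 = 2.326                                            # constant for subgroup size 5
    r_bar = statistics.mean(max(g) - min(g) for g in subgroups)
    sigma_within = r_bar / d2                             # used for Cp / Cpk

    all_data = [x for g in subgroups for x in g]
    sigma_overall = statistics.stdev(all_data)            # used for Pp / Ppk
    xbar = statistics.mean(all_data)

    cpk = min(usl - xbar, xbar - lsl) / (3 * sigma_within)
    ppk = min(usl - xbar, xbar - lsl) / (3 * sigma_overall)
    print(f"sigma_within = {sigma_within:.3f}, sigma_overall = {sigma_overall:.3f}")
    print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")

With drifting subgroup means, sigma_overall exceeds sigma_within, so Ppk comes out lower than Cpk, mirroring the Statit discussion above.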

So the next question is which metric is best to report, Cpk or Ppk? In other words, which standard deviation should be used - estimated or calculated? Although both indices show similar information, they have slightly different uses.

• Ppk attempts to answer the question "does my current production sample meet specification?" Process performance indices should only be used when statistical control cannot be evaluated.

• On the other hand, Cpk attempts to answer the question "does my process in the long run meet specification?" Process capability evaluation can only be done after the process is brought into statistical control. The reason is simple: Cpk is a prediction, and one can only predict something that is stable.

The readers should note that Ppk and Cpk indices would likely be similar when the process is in a state of statistical control.

Notes:

1. As a rule of thumb, a minimum of 50 randomly selected samples must be chosen for process performance studies, and a minimum of 20 subgroups (of sample size preferably at least 4 or 5) must be chosen for process capability studies.

2. Cpk for all critical product measurements considered important by the customer should be calculated at the beginning of initial production to determine the general ability of the process to meet customer specifications. Then, from time to time over the life of the product, Cpks must be generated. A control chart must always be maintained to check the statistical stability of the process before capability is computed.

Process Capability and Defect Rate :

Using process capability indices, it is easy to forget how much of the product is falling beyond specification. The conversion curve presented here can be a useful tool for interpreting Cpk with its corresponding defect levels. The defect levels, or parts per million non-conforming, were computed for different Cpk values using the Z scores and the percentage area under the standard normal curve from normal deviate tables.


The table below presents the non-conforming parts per million (ppm) for a process corresponding to Cpk values if the process mean were at target.

Cpk Value   Sigma Value   Area under Normal Curve   Non-Conforming ppm
0.1         0.3           0.235822715               764177.2851
0.2         0.6           0.451493870               548506.1299
0.3         0.9           0.631879817               368120.1835
0.4         1.2           0.769860537               230139.4634
0.5         1.5           0.866385542               133614.4576
0.6         1.8           0.928139469               71860.531
0.7         2.1           0.964271285               35728.7148
0.8         2.4           0.983604942               16395.0577
0.9         2.7           0.993065954               6934.0461
1.0         3.0           0.997300066               2699.9344
1.1         3.3           0.999033035               966.9651
1.2         3.6           0.999681709               318.2914
1.3         3.9           0.999903769               96.231
1.333       3.999         0.999936360               63.6403
1.4         4.2           0.999973292               26.7082
1.5         4.5           0.999993198               6.8016
1.6         4.8           0.999998411               1.5887
1.666       4.998         0.999999420               0.5802
1.7         5.1           0.999999660               0.3402
1.8         5.4           0.999999933               0.0668
1.9         5.7           0.999999988               0.012
2.0         6.0           0.999999998               0.002
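The same conversion (two-sided, mean on target) can be reproduced with a few lines of Python using the standard library error function; the Cpk values chosen below are just sample points:

    from math import erf, sqrt

    def phi(z):
        # Cumulative area under the standard normal curve to the left of z
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def ppm_nonconforming(cpk):
        # Two-sided nonconforming ppm when the process mean is on target
        z = 3.0 * cpk
        return 2.0 * (1.0 - phi(z)) * 1_000_000

    for cpk in (0.5, 1.0, 1.33, 1.5, 2.0):
        print(f"Cpk = {cpk:.2f}: about {ppm_nonconforming(cpk):.4f} ppm")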

The Cpk conversion curve for process with mean at target is shown next.


Explanation: A process with a Cpk of 2.0 (+/- 6 sigma capability), i.e., one whose process mean is 6 sigma away from the nearest specification, can be expected to have no more than 0.002 nonconforming parts per million. This process is so good that even if the process mean shifts by as much as +/- 1.5 sigma, the process will produce no more than 3.4 non-conforming parts per million.

The next section provides the reader with some practical clarifications on Process Capability (voice of the process) and Specification (expectations of the customer).

Natural Variability versus Specifications for Process Capability :

As seen from the earlier discussions, there are three components of process capability:

1. Design specification or customer expectation (Upper Specification Limit, Lower Specification Limit)

2. The centering of the natural process variation (X-Bar)

3. Spread of the process variation (s)

A minimum of four possible outcomes can arise when the natural process variability is compared with the design specifications or customer expectations:

Case 1: Cpk > 1.33 (A Highly Capable Process)

This process should produce less than 64 non-conforming ppm

A Highly Capable Process: Voice of the Process < Specification (or Customer Expectations).

This process will produce conforming products as long as it remains in statistical control. The process owner can claim that the customer should experience least difficulty and greater reliability with this product. This should translate into higher profits.

Note: Cpk values of 1.33 or greater are considered to be industry benchmarks. This means that the process is contained within four standard deviations of the process specifications.

Case 2: Cpk = 1 to 1.33 ( A Barely Capable Process )

This process will produce greater than 64 ppm but less than 2700 non-conforming ppm.


A Barely Capable Process : Voice of the Process = Customer Expectations.

This process has a spread just about equal to the specification width. It should be noted that if the process mean moves to the left or the right, a significant portion of product will start falling outside one of the specification limits. This process must be closely monitored.

Note : This process is contained within three to four standard deviations of the process specifications.

Case 3: Cpk < 1 ( The Process is not Capable )

This process will produce more than 2700 non-conforming ppm.

A Non-Capable Process : Voice of the Process > Customer Expectations.


It is impossible for the current process to meet specifications even when it is in statistical control. If the specifications are realistic, an effort must be immediately made to improve the process (i.e., reduce variation) to the point where it is capable of producing consistently within specifications.

Case 4: Cpk < 1 ( The Process is not Capable )

This process will also produce more than 2700 non-conforming ppm.

The variability (s) and specification width are assumed to be the same as in Case 2, but the process average is off-center. In such cases, adjustment is required to move the process mean back to target. If no action is taken, a substantial portion of the output will fall outside the specification limit even though the process might be in statistical control.

Assumptions, Conditions and Precautions :

The capability indices described here strive to represent with a single number the capability of a process. Much has been written in the literature about the pitfalls of these estimates. Following are some of the precautions the reader should exercise while calculating and interpreting process capability:

1. The indices for process capability discussed are based on the assumption that the underlying process distribution is approximately bell-shaped, or normal. Yet in some situations the underlying process distribution may not be normal. For example, flatness, pull strength, waiting time, etc., might naturally follow a skewed distribution. For these cases, calculating Cpk the usual way might be misleading. Many researchers have contributed to this problem. Readers are requested to refer to John Clements' article titled "Process Capability Calculations for Non-Normal Distributions" for details.

2. The process / parameter in question must be in statistical control. It is this author's experience that there is a tendency to want to know the capability of the process before statistical control is established. The presence of special causes of variation makes the prediction of process capability difficult and the meaning of Cpk unclear.

3. The data chosen for a process capability study should attempt to encompass all natural variations. For example, one supplier might report a very good process capability value using only ten samples produced on one day, while another supplier of the same commodity might report a somewhat lesser process capability number using data from a longer period of time that more closely represent the process.


If one were to compare these process index numbers when choosing a supplier, the best supplier might not be chosen.

4. The number of samples used has a significant influence on the accuracy of the Cpk estimate. For example, for a random sample of size n = 100 drawn from a known normal population with Cpk = 1, the Cpk estimate can vary from 0.85 to 1.15 (with 95% confidence). Smaller samples will therefore result in even larger variations of the Cpk statistic. In other words, the practitioner must take into consideration the sampling variation's influence on the computed Cpk number. Please refer to Bissell and to Chou, Owen, and Borrego for more on this subject.

Concluding Thoughts :

In the real world, very few processes completely satisfy all the conditions and assumptions required for estimating Cpk. Also, statistical debates in research communities are still raging on the strengths and weaknesses of various capability and performance indices. Many new, complicated capability indices have also been invented and cited in the literature. However, the key to effectual use of process capability measures continues to be the level of user understanding of what these measures really represent. Finally, in order to achieve continuous improvement, one must always attempt to refine the "Voice of the Process" to match and then to surpass the "Expectations of the Customer".

References :

1. Victor Kane, "Process Capability Indices", Journal of Quality Technology, Jan 1986.
2. ASQ / AIAG, "Statistical Process Control", Reference Manual, 1995.
3. John Clements, "Process Capability Calculations for Non-Normal Distributions", Quality Progress, Sept 1989.
4. Forrest Breyfogle, "Measurement of Process Capability", Smarter Solutions, 1996.
5. Bissell, "How Reliable is Your Capability Index", Royal Statistical Society, 1990.
6. Chou, Owen, and Borrego, "Lower Confidence Limits of Process Capability Indices", Journal of Quality Technology, Vol. 22, No. 3, July 1990.


Sampling Plan, Acceptance Sampling: LTPD & AQL

I. Sampling Plan

Sampling is one of the most important functions of quality control. In a large scale production environment, testing every single product from the production lines is not cost effective because it would require a plethora of manpower and a great deal of time and space.

Consider a company that produces a hundred thousand tires a day. If the company is open 16 hours a day (two 8-hour shifts) and it takes an employee 10 minutes to test a tire, it would need at least 2,084 employees in the quality control department to test every single tire that comes out of production, and a tremendous amount of space for the QA department and the inventory.
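The arithmetic behind that headcount can be written out in a couple of lines of Python; the only assumption added here is that each inspector works a single 8-hour shift:

    tires_per_day = 100_000
    minutes_per_test = 10
    shift_minutes = 8 * 60                                 # one inspector covers one 8-hour shift

    workload_minutes = tires_per_day * minutes_per_test    # 1,000,000 inspection minutes per day
    inspectors = -(-workload_minutes // shift_minutes)     # ceiling division
    print(inspectors)                                      # 2084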

For a normally distributed production output, taking a sample of the output and testing it can help determine the quality level of the whole production. Sampling consists of testing a subset of the population in order to derive a conclusion for the whole population.

The sample statistics may not always be exactly the same as their corresponding population parameters. The difference is known as the sampling error.

Ia. Acceptance Sampling Plan
Acceptance sampling is an important field of statistical quality control that was popularized by Dodge and Romig and originally applied by the U.S. military to the testing of bullets during World War II. If every bullet were tested in advance, no bullets would be left to ship. If, on the other hand, none were tested, malfunctions might occur in the field of battle, with potentially disastrous results. Dodge reasoned that a sample should be picked at random from the lot, and on the basis of the information yielded by the sample, a decision should be made regarding the disposition of the lot. In general, the decision is either to accept or reject the lot. This process is called Lot Acceptance Sampling or just Acceptance Sampling.

Acceptance sampling is “the middle of the road” approach between no inspection and 100% inspection. There are two major classifications of acceptance plans: by attributes (“go, no-go”) and by variables.

Acceptance sampling plans always lie somewhere between no inspection and 100% inspection. Sometimes there is no choice. If you must get product out the door and all you have in stock is rejected or defective material, then you must sort through it and try to find the good ones. (I would also look for another supplier or two.) This is very costly. (I would be trying to get the supplier to pick up your cost of doing their job.) But what choice is there? You MUST meet the customer's requirements.

But first, a little more on 100% inspection. I am a firm believer that YOU CANNOT INSPECT QUALITY INTO A PRODUCT. 100% inspection has an effectiveness of 40-65%, and that doesn't count the 5-10% breakage. It is a waste of people power (formerly known as man-power). I remember reading, I think in one of Juran's books, that 100% inspection is only 60% effective, but I have found it to be a little less than that.

A point to remember is that the main purpose of acceptance sampling is to decide whether or not the lot is likely to be acceptable, not to estimate the quality of the lot.

Acceptance sampling is employed when one or more of the following hold: testing is destructive, the cost of 100% inspection is very high, or 100% inspection takes too long.

It was pointed out by Harold Dodge in 1969 that Acceptance Quality Control is not the same as Acceptance Sampling. The latter depends on specific sampling plans, which when implemented indicate the conditions for acceptance or rejection of the immediate lot that is being inspected. The former may be implemented in the form of an Acceptance Control Chart. The control limits for the Acceptance Control Chart are computed using the specification limits and the standard deviation of what is being monitored.

In 1942, Dodge stated: "…basically the 'acceptance quality control' system that was developed encompasses the concept of protecting the consumer from getting unacceptable defective product, and encouraging the producer in the use of process quality control by: varying the quantity and severity of acceptance inspections in direct relation to the importance of the characteristics inspected, and in inverse relation to the goodness of the quality level as indicated by those inspections." To reiterate the difference in these two approaches: acceptance sampling plans are one-shot deals, which essentially test short-run effects. Quality control is of the long-run variety, and is part of a well-designed system for lot acceptance.

Schilling (1989) said: “An individual sampling plan has much the effect of a lone sniper, while the sampling plan scheme can provide a fusillade in the battle for quality improvement.”

According to the ISO standard on acceptance control charts (ISO 7966, 1993), an acceptance control chart combines consideration of control implications with elements of acceptance sampling. It is an appropriate tool for helping to make decisions with respect to process acceptance. The difference between acceptance sampling approaches and acceptance control charts is the emphasis on process acceptability rather than on product disposition decisions.

According to Dr. Jim Stewart of Northern Illinois University in DeKalb, IL:

While working on my dissertation, I was reviewing some trade magazines from the 50's. There were a number of case studies showing 50-75% efficiency and a breakage rate (visual inspections of wire wraps with pics) of 10-15%, giving an effectiveness of 40-65%.

It was just about a year before the 'Six Sigma' era. Motorola had just come out with a couple of new radios. One was the KDT, a data terminal using 900 MHz; IBM was the pilot customer. The other was the STX 800 MHz two-way trunking radio, which was being sold as a pilot project to the City of Miami's Police Department. The quality of the KDTs was so bad that rumor had it there were two IBM engineers testing each unit themselves before authorizing shipments. The STX radios had received several complaints that the dispatchers could not hear the police officers in the field. So the development and process engineers got to work, but everything seemed to be fine on that end. Ergo, it MUST be the labor force. Motorola had released two new products, both from what at that time was the 'Communication Sector', now called 'Land Mobile', and before they got into full production there was a gross amount of customer complaints.

I had already gone through what at that time were the ASQC courses, where, as I recall, they said that 100% inspection is only 60% effective. The quality manager decided that we would 100% inspect the radios until we got 'x' good ones, and then the sample number would decrease. Well, I had the proverbial cow. I started my shift, being the only engineering support person on 2nd shift, and thought: if I have to look at every radio, then I'm going to collect data so that it won't be a waste of time. The radio factory only collected data on the prototypes and maybe the first run. The Quality Engineering gurus were glad to see some data finally being taken on actual product being shipped.

At that time, Motorola ran a test called 'Dev' by modulating the radio with a 1 kHz tone. I thought to myself, this is silly, who talks at 1 kHz? (Common sense.) Since we were just whistling into the radio and then checking the Motorola service monitor anyway, I started whistling not a single tone but a sweeping tone and taking my data. If a reading was a little low, I would try around that frequency to see if I could produce a failure, usually caused by an external failure. At the end of the shift, I went to the office and, on a Compaq 'portable' (you know, that white Compaq with the green screen that looked like a sewing machine when closed up, and someone had the guts to call it a 'portable'), entered my data into SQCpak. I then printed distribution, X-Bar and Range control charts and left them on my boss's desk.

The next day he went to the daily product meeting and presented the Development Engineering Manager with my charts. Fortunately the Development Engineering Manager actually looked at the charts and noticed that, consistently, my 'Dev' numbers were better than production's and than what they saw in the prototype. Production had an automatic testing machine to test the radios, and the quality department tested the radios by hand. The day after that, the Development Engineering Manager waited for me to come in and gave me some data the development team had collected to put into the SPC (statistical process control) software. The results were exactly what they had gotten before. The very next day development engineering watched me to find out what I was doing differently.

It was, of course, the sweeping frequencies (i.e., more 'real world' like). This better 'Dev' number allowed them to change a part in the radio that fixed the problem with the STX radio. Now, or at least as of my last day there, Motorola was testing all new radios with this sweeping frequency. What's my point? We could have 100% inspected radios for years and never found the problem. We were looking in the wrong place. So if you find yourself needing to 100% inspect, at least collect data and do the control charts; even just X-Bar, Range and distribution charts can tell you a lot. You never know what they may reveal until you do.

Shortly after that I was promoted to Component Engineering, where I first heard of Six Sigma. I wrote software that allowed Motorola and their suppliers to use a standard SPC package to aid in starting their Six Sigma program. This allowed suppliers to submit sample data and control charts with every shipment coming into VQA. Without this software, it would have been a mess trying to train the inspectors what to look for with the different SPC charts, and without training the inspectors, the Quality Engineers would have had to look at each shipment, which would have just made them overpriced inspectors. The concepts of Six Sigma are much larger than just measurements, of course. It's also about getting a Return On Net Investments (RONI) and Return On Net Assets (RONA).

There are two major classifications of acceptance plans (each tied to an AQL, Acceptable Quality Level): by attributes or discrete data ("go, no-go") and by variable data (real numbers). The attribute case is the most common for acceptance sampling.

Sample
A subset of the population used instead of the entire population, for improved accuracy and reduced cost. Sampling is more accurate in statistical process control for several reasons, including that it is less susceptible to 'inspection fatigue', so there is less chance of errors. Products that are 100% inspected are sometimes found to have an astounding number of defects.

a) Systematic sampling
Methodology for sampling in which units are selected from the population at a regular interval (e.g., once an hour, every other lot, etc.). This is used for many SPC (statistical process control) control charts, such as X-Bar and R charts. One reason for this type of sampling is to see if the process has any variations due to different times of the day, shift workers changing, temperature changes during day versus night, etc.

b) Stratified sampling
The act of dividing a larger population into subgroups, using systematic sampling, then taking a random sample from each subgroup. Random sampling can frequently minimize the sampling error in the population, which in turn increases the precision of any estimation methods used. You MUST be sure to select relevant variables for the grouping process, and the information, or data, must be as accurate as possible. With this method of sampling, results for any given population can vary significantly depending on the grouping and how uniform the groups are.

c) Sampling bias
When data are influenced in one way or another so that they no longer represent the entire population.

d) Random sampling
A technique that ensures each item in a sample for inspection is selected completely by chance to be measured. You should never do 100% inspection in any Six Sigma quality plan. Sampling is more accurate in statistical process control for several reasons, including that it is less susceptible to 'inspection fatigue', so there is less chance of errors. Products that are 100% inspected are sometimes found to have an astounding number of defects.

By using SPC and other control charts with a control plan, you can measure fewer production piece parts (typically using a systematic sampling plan in unison with stratified sampling, ensuring that sampling bias does not become an issue), thus saving time and labor while, in theory, still having the same confidence in the quality of your process. Famous errors and lessons learned in random sampling can be attributed to Literary Digest magazine in 1936. They took a random sample from the telephone listings for a pre-election poll. The 10 million samples taken during the depression years were not accurate because in 1936 the people who could afford a phone and a magazine subscription did not constitute a random sample. The company was soon out of business.

AQL – Acceptable Quality Level
This is usually defined as the worst-case quality level, in percentage or ratio, that is still considered acceptable. QA (Quality Assurance) may be in charge of monitoring AQLs.


If a produced unit can have a number of different defects, then demerits can be assigned to each type of defect and product quality measured in terms of demerits. As an AQL is an acceptable level, the probability of acceptance for an AQL lot should be high (typical values range from 92.74% to 99.999996% for Six Sigma; see how Cpk compares to PPM for the reasoning behind these values).

Some sources characterize an acceptable quality level as the highest percent defective that should be considered reasonable as the process average. It is usually monitored using SPC (Statistical Process Control) at the production level by quality inspection.

Standard military sampling procedures, (MIL-STD), have been used for over 50 years to achieve these goals. The MIL-STD defines AQL as… “the maximum percent defective (or the maximum number of defects per hundred units) that, for purposes of sampling inspection, can be considered satisfactory as a process average.”

Suppose a population of 10 bolts has diameter measures of 9, 11, 12, 12, 14, 10, 9, 8, 7, 9. The mean for that population would be 10.1. If a sample of the following three measures - 9, 14, 10 - is taken from the population, the mean of the sample would be (9 + 14 + 10)/3 = 11 and the sampling error would be 11 - 10.1 = 0.9.

Let's take another sample of three measures: 7, 12 and 11. This time the mean will be 10 and the sampling error will be 10 - 10.1 = -0.1.

If another sample is taken and its mean estimated, its sampling error might be different. These differences are said to be due to chance. So if it is possible to make mistakes while estimating the population's parameters from a sample, how can we be sure that sampling can help get a good estimate? Why use sampling as a means of estimating the population parameters?

The central limit theorem can help us answer these questions.

Ib. Central Limit Theorem

The central limit theorem states that for sufficiently large sample sizes (n ≥ 30), regardless of the shape of the population distribution, if samples of size n are randomly drawn from a population that has a mean μ and a standard deviation σ, the samples' means x̄ are approximately normally distributed. If the population itself is normally distributed, the samples' means are normally distributed regardless of the sample size.

Here the mean of the sample means equals the population mean μ, and the standard deviation of the sample means equals σ/√n.

The implication of this theorem is that for sufficiently large sample sizes, the normal distribution can be used to analyze samples drawn from populations that are not normally distributed or whose shapes are unknown.

When sample means are used as estimators to make inferences about the population parameters μ and σ, the estimator will be approximately normally distributed in repeated sampling.
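The theorem is easy to see in a small simulation (a sketch; the skewed population, sample size, and number of samples are arbitrary choices, not from the text):

import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # clearly non-normal population

n = 40                                                   # sample size (n >= 30)
sample_means = [rng.choice(population, size=n).mean() for _ in range(5_000)]

# The sample means cluster around the population mean, with a spread
# close to sigma / sqrt(n), and their histogram looks roughly normal.
print(population.mean(), np.mean(sample_means))
print(population.std() / np.sqrt(n), np.std(sample_means))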


Ic. Sampling distribution of the sample mean
We have seen in the example of the bolt diameters that the mean of the first sample was 11 and the mean of the second was 10. If the means of all possible samples are obtained and organized, we can derive the sampling distribution of the means.

In that example, we had 10 bolts, if all possible samples of 3 were computed, there would have been 120 samples and means.

The mean of that sampling distribution equals the population mean μ, and its standard deviation equals σ/√n.

Example 1:

Gajaga-electronics is a company that manufactures circuit boards. When the production process is under control, the average number of imperfections on a board is μ with a standard deviation σ. A random sample of more than 30 circuit boards has been taken for inspection and a mean of x̄ defects per board was found. What is the probability of getting such a value of x̄ if the process is under control?

Solution: Since the sample size is greater than 30, the central limit theorem can be used in this case even though the number of defects per board follows a Poisson distribution. Therefore, the distribution of the sample mean is approximately normal with standard deviation σ/√n. The resulting Z value corresponds to 0.4948 on the table of normal curve areas.

The probability of getting such a value of x̄ is 0.5 + 0.4948 = 0.9948.

The previous example is valid for an extremely large population. Sampling from a finite population requires an adjustment called the finite correction factor: √((N - n)/(N - 1)).


Z will therefore become Z = (x̄ - μ) / ((σ/√n) × √((N - n)/(N - 1))).

Example 2:

A city's 450 restaurant employees average $35 in tips a day with a standard deviation of $9. A sample of 50 employees is taken; what is the probability that the sample will have an average of less than $37 in tips a day?

Solution:

On the Z-score table, 1.77 corresponds to .4616 therefore, the probability of getting an average daily tip of less than $37 will be .4616 + .5= .9616.

If the finite correction factor were not taken into account, z would have been 1.57, which corresponds to .4418 on the z-score table, and therefore the probability of having an average daily tip of less than $37 would have been .9418.

Id. Sampling distribution of the sample proportion
When the data being analyzed are measurable, as in the case of the two previous examples or in the case of distance or income, the sample mean is often the statistic of choice. However, when the data are countable, as in the case of people in a group or defective items on a production line, the sample proportion is the statistic of choice.

The sample proportion applies to situations that would have required a binomial distribution, where p is the probability of a success and q the probability of a failure, with p + q = 1. When a random sample of n trials is selected from a binomial population (an experiment with n identical trials, each trial having only two possible outcomes, considered as success or failure) with parameter p, the sample proportion will be p̂ = x/n, where x is the number of successes.


The mean and standard deviation of p̂ will be p and √(pq/n), respectively.

If np ≥ 5 and nq ≥ 5, then the sampling distribution of p̂ can be approximated using the normal distribution.

Example 3: In a sample of 100 workers, 25 might be coming late once a week. The sample proportion p̂ of the latecomers will be 25/100 = 0.25. In that example, np̂ = 25 and nq̂ = 75; since both are at least 5, the central limit theorem applies to the sample proportion.

The Z formula for the sample proportion is given as:

Z = (p̂ - p) / √(pq/n)

where:

p̂ = sample proportion, p = population proportion, n = sample size, and q = 1 - p.

Example 4: 40% of the parts that come off a production line are defective. What is the probability of taking a random sample of size 75 from the line and finding that .6 or less are defective?

Solution:

Z = (0.6 - 0.4) / √(0.4 × 0.6 / 75) = 3.54

On the standard normal distribution table, 3.54 corresponds to 0.4998. So the probability of finding 60% or fewer defective parts is 0.5 + 0.4998 = 0.9998.
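A short sketch of the same calculation with Python's standard library:

from math import sqrt
from statistics import NormalDist

p, n = 0.40, 75                                  # population proportion, sample size
p_hat = 0.60                                     # observed sample proportion

z = (p_hat - p) / sqrt(p * (1 - p) / n)          # about 3.54
print(z, NormalDist().cdf(z))                    # probability about 0.9998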


Example 5: 40% of all the employees have signed up for the stock option plan. An HR specialist believes that this ratio is too high. She takes a sample of 450 employees and finds that 200 have signed up. What is the probability of getting a sample proportion larger than this if the population proportion is really 0.4?

Solution:

The computed Z value corresponds to 0.4582 on the standard normal distribution table.

The probability of getting a sample proportion larger than the observed 200/450 = 0.444 will be 0.5 - 0.4582 = 0.0418.

Ie. Estimating the population mean with large sample sizes
Suppose a company has just developed a new process for prolonging the life of a light bulb. The engineers want to know the bulbs' longevity, yet it is not possible to test each bulb in a production process that generates hundreds of thousands of bulbs a day. But they can take a random sample, determine its average longevity, and from there estimate the longevity of the whole population.

Using the central limit theorem, we have determined that the Z value for sample means, Z = (x̄ - μ)/(σ/√n), can be used for large samples.

By rearranging this formula, we can derive the value of μ: μ = x̄ - Z σ/√n.

Since Z can be positive or negative, the next formula is more accurate: μ = x̄ ± Z σ/√n.

In other terms, μ will be within the following confidence interval:

x̄ - Z σ/√n ≤ μ ≤ x̄ + Z σ/√n

where x̄ - Z σ/√n is the lower confidence limit (LCL) and x̄ + Z σ/√n is the upper confidence limit (UCL).


But a confidence interval presented as such does not take into account the area under the normal curve that is outside the confidence interval.

We estimate with some confidence that the mean is within the interval, but we cannot be absolutely certain that it is unless the confidence interval is 100%.

For a two-tailed normal curve, if we want to be 95% sure that μ is within that interval, then the confidence level will be equal to .95 (1 - α, with α = .05), and the area under each tail will be α/2 = .025; Z(α/2) = Z(.025), which corresponds to 1.96 on the Z-table.

The confidence interval should therefore be rewritten as

x̄ - Z(α/2) σ/√n ≤ μ ≤ x̄ + Z(α/2) σ/√n

or μ = x̄ ± Z(α/2) σ/√n.

The table below shows the most commonly used confidence coefficients and their Z-score values.

Confidence level (1 - α)    α       Z(α/2)
0.90                        0.10    1.645
0.95                        0.05    1.96
0.99                        0.01    2.58

Example 6: A survey of companies that use solar panels as a primary source of electricity was conducted. The question that was asked was this: How much of the electricity used in your company comes from the solar panels? A random sample of 55 responses produced a mean of 45 megawatts. Suppose the population standard deviation for this question is 15.5 megawatts. Find the 95% confidence interval for the mean.

Solution:

45 ± 1.96 × 15.5/√55 = 45 ± 4.1

We can be 95% sure that the mean will be between 40.9 and 49.1 megawatts; in other words, the probability for the mean to be between 40.9 and 49.1 is 0.95.
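The interval can be reproduced in a couple of lines (a sketch of the z-based interval):

from math import sqrt

x_bar, sigma, n, z = 45.0, 15.5, 55, 1.96
half_width = z * sigma / sqrt(n)                 # about 4.1
print(x_bar - half_width, x_bar + half_width)    # about 40.9 and 49.1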


When the sample size is large (n ≥ 30), the sample's standard deviation s can be used as an estimate of the population standard deviation σ.

Example 7: A sample of 200 circuit boards was taken from a production line, and it showed the average number of defects to be 7 with a standard deviation of 2. What is the 95% confidence interval for the population average μ?

Solution:

7 ± 1.96 × 2/√200 = 7 ± 0.28, so 6.72 ≤ μ ≤ 7.28.

In repeated sampling, 95% of the confidence intervals will enclose the average number of defects per circuit board for the whole population, μ.

Example 8: What would the interval be if the confidence level were 90%?

Solution:

7 ± 1.645 × 2/√200 = 7 ± 0.23, so 6.77 ≤ μ ≤ 7.23.

In repeated sampling, 90% of the confidence intervals will enclose the average number of defects per circuit board for the whole population, μ.

If. Estimating the population mean with small sample sizes and unknown σ: the t-distribution

We have seen that when the population is normally distributed and the standard deviation σ is known, μ can be estimated to be within the interval x̄ ± Z(α/2) σ/√n. But, as in the case of the above example, σ is often not known. In these cases, it can be replaced by s, the sample's standard deviation, and μ is found within the interval x̄ ± Z(α/2) s/√n. Replacing σ with s can only be a good approximation if the sample size is large, i.e. n > 30.

In fact, the Z formula has been determined not to always generate normal distributions for small sample sizes, even if the population is normally distributed.

So in the case of small samples, and when σ is not known, the t-distribution is used instead.


The formula for that distribution is given as:

t = (x̄ - μ) / (s/√n)

The right side of this equation is identical to that of the Z formula, but the tables used to determine the t values are different from the ones used for the Z values.

Just as in the case of the Z formula, the t formula can also be manipulated to estimate μ, but since the sample sizes are small, in order not to produce a biased result we use the degrees of freedom, df = n - 1.

So the mean will be found within the interval

x̄ - t(α/2, n-1) s/√n ≤ μ ≤ x̄ + t(α/2, n-1) s/√n

or μ = x̄ ± t(α/2, n-1) s/√n.

Example 9: A manager of a car rental company wants to know the number of times luxury cars are rented in a month. She takes a random sample of 19 cars that produces the following results: 3 7 12 5 9 13 2 8 6 14 6 1 2 3 2 5 11 13 5. She wants to use these data to construct a 95% confidence interval to estimate the average.

Solution: 3 + 7 + 12 + 5 + 9 + 13 + 2 + 8 + 6 + 14 + 6 + 1 + 2 + 3 + 2 + 5 + 11 + 13 + 5 = 127, so x̄ = 127/19 = 6.68 and s = 4.23. With df = 18, t(.025, 18) = 2.101, and the interval is 6.68 ± 2.101 × 4.23/√19 = 6.68 ± 2.04.

The probability for μ to be between 4.64 and 8.72 is 0.95.
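A sketch of the same t-interval, assuming scipy is available:

import numpy as np
from scipy import stats

data = [3, 7, 12, 5, 9, 13, 2, 8, 6, 14, 6, 1, 2, 3, 2, 5, 11, 13, 5]
n = len(data)
x_bar, s = np.mean(data), np.std(data, ddof=1)
t = stats.t.ppf(0.975, df=n - 1)                 # about 2.101
half_width = t * s / np.sqrt(n)
print(x_bar - half_width, x_bar + half_width)    # about 4.64 and 8.72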

The χ2 (chi-square) Distribution
In most cases in quality control, the objective of the auditor is not to find the mean of a population but rather to determine the level of variation of the output. He would, for instance, want to know how much variation the production process exhibits about the target in order to see what adjustments are needed to reach a defect-free process.

We have already seen that the sample variance is determined as s² = Σ(xᵢ - x̄)² / (n - 1).


The χ2 formula for a single variance is given as:

χ2 = (n - 1) s² / σ²

The shape of the χ2 distribution resembles the normal curve, but it is not symmetrical, and its shape depends on the degrees of freedom.

The formula can be rearranged to find σ². σ² will be within the interval

(n - 1) s² / χ2(α/2) ≤ σ² ≤ (n - 1) s² / χ2(1 - α/2)

with n - 1 degrees of freedom.

Example 10: A sample of 9 screws was taken from a production line, with the following values: 13.00 mm, 13.00 mm, 12.00 mm, 12.55 mm, 12.99 mm, 12.89 mm, 12.88 mm, 12.97 mm, 12.99 mm.

We are trying to estimate the population variance with 95% confidence.

Solution: We need to determine the point estimate, which is the sample's variance: s² = 0.1122, with degrees of freedom df = n - 1 = 8. Since we want to estimate with a confidence level of 95%, α = 0.05 and α/2 = 0.025.


So σ² will be within the following interval:

(n - 1) s² / χ2(0.025) ≤ σ² ≤ (n - 1) s² / χ2(0.975)

From the table, the values of χ2(0.025) and χ2(0.975) for 8 degrees of freedom are respectively 17.5346 and 2.17973.

So the confidence interval becomes:

8 × 0.1122 / 17.5346 = 0.0512 and 8 × 0.1122 / 2.17973 = 0.412

The probability for σ² to be between 0.0512 and 0.412 is 0.95.
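A sketch of the same variance interval, assuming scipy is available:

import numpy as np
from scipy import stats

data = [13.00, 13.00, 12.00, 12.55, 12.99, 12.89, 12.88, 12.97, 12.99]
n = len(data)
s2 = np.var(data, ddof=1)                                # sample variance, about 0.112

lower = (n - 1) * s2 / stats.chi2.ppf(0.975, df=n - 1)   # about 0.051
upper = (n - 1) * s2 / stats.chi2.ppf(0.025, df=n - 1)   # about 0.412
print(lower, upper)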

Ig. Estimating sample sizes
In most cases, sampling is used in quality control to make an inference about a whole population because of the cost associated with studying every individual part of that population. But then the question of the sample size arises. What size of sample best reflects the condition of the whole population being estimated? Should we consider a sample of 150 or 1,000 products from a production line to determine the quality level of the output?

a. Sample size when estimating the mean
At the beginning of this chapter, we defined the sampling error E as being the difference between the sample mean x̄ and the population mean μ.

We have also seen, when studying the sampling distribution of x̄, that when μ is being estimated we can use the Z formula for sample means, Z = (x̄ - μ)/(σ/√n).

We can clearly see that the numerator is nothing but the sampling error E. We can therefore replace x̄ - μ by E in the Z formula and come up with Z = E/(σ/√n).

We can determine n from this equation: n = (Z(α/2) σ / E)².


Example 11: A production manager at a call center wants to know how much time, on average, an employee should spend on the phone with a customer. She wants to be within 2 minutes of the actual length of time, and the standard deviation of the time spent is known to be 3 minutes. What sample size of calls should she consider if she wants to be 95% confident of her result?

Solution:

n = (1.96 × 3 / 2)² = 8.6436

Since we cannot have 8.6436 calls, we round the result up to 9 calls. The manager can be 95% confident that with a sample of 9 calls she can determine the average length of time an employee needs to spend on the phone with a customer.
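The same arithmetic in a couple of lines (a sketch):

import math

z, sigma, E = 1.96, 3.0, 2.0
n = (z * sigma / E) ** 2
print(n, math.ceil(n))          # 8.6436 -> 9 calls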

b. Sample size when estimating the population proportion
To determine the sample size needed when estimating p, we can use the same procedure as the one we used when determining the sample size for the mean.

We have already seen that the Z formula for the sample proportion is given as Z = (p̂ - p)/√(pq/n).

The error of estimation (or sampling error) in this case will be E = p̂ - p.

We can replace p̂ - p by E in the Z formula and obtain Z = E/√(pq/n).

We can derive n from this equation: n = Z(α/2)² p q / E².


Example 12: A study is being conducted to determine the extent to which companies promote Open Book Management. The question asked of employees is: Do your managers provide you with enough information about the company? It was previously estimated that only 30% of companies actually provide the information needed to their employees. If the researcher wants to be 95% confident in the results and within 0.05 of the true population proportion, what size of sample should she take?

n = 1.96² × 0.3 × 0.7 / 0.05² = 322.7, so she must take a sample of 323 companies.
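A sketch of the calculation behind that figure:

import math

z, p, E = 1.96, 0.30, 0.05
n = z ** 2 * p * (1 - p) / E ** 2
print(n, math.ceil(n))          # about 322.7 -> 323 companies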

II. LTPD - Lot Tolerance Percent Defective
LTPD stands for Lot Tolerance Percent Defective. The LTPD of a sampling plan is a level of quality routinely rejected by the sampling plan. It is generally defined as that level of quality (percent defective, defects per hundred units, etc.) that the sampling plan will accept only 10% of the time (and reject 90% of the time). This means lots at or worse than the LTPD are accepted at most 10% of the time; in other words, they are rejected at least 90% of the time. The LTPD can be determined using the OC curve by finding the quality level on the bottom axis that corresponds to a probability of acceptance of 0.10 (10%) on the left axis.

Associated with the LTPD is a confidence statement one can make: if a lot passes the sampling plan, one can state with 90% confidence that its quality level is equal to or better than the LTPD (i.e., the defective rate of the lot is below the LTPD); passing the sampling plan demonstrates that the LTPD requirement has been met.

The LTPD is used to help describe the protection provided by a sampling plan, but it only provides half the answer: it describes what the sampling plan will reject. We would also like to know what the sampling plan will accept. The answer to this second question is provided by the AQL of the sampling plan.

Lot Tolerance Percent Defective (LTPD), expressed in percent defective, is the poorest quality in an individual lot that should be accepted. The LTPD has a low probability of acceptance. In many sampling plans, the LTPD is the percent defective having a 10% probability of acceptance under the agreed sampling plan. With this plan, the producer agrees to produce just enough nonconforming product such that the consumer will accept the lot using the agreed-upon sampling plan and AQL level.

(LTPD and LQ and RQL mean exactly the same thing.) AQL and Alpha together specify the fraction defective (AQL) of a lot that the plan will have a small probability (Alpha) of rejecting. (1-Alpha) = the probability of accepting a lot if it contains AQL fraction defective. AQL and Alpha define the “producer’s point” of the operating characteristic curve of the plan. (oc curve). It is called the “producer’s point” because it satisfies the producer’s intentions of usually accepting lots if those lots are truly AQL fraction defective.


RQL and Beta together specify the fraction defective (RQL) of a lot that the plan will have a small probability (Beta) of accepting. RQL and Beta define the "consumer's point" of the operating characteristic (OC) curve of the plan. It is called the "consumer's point" because it satisfies the consumer's intention of usually rejecting lots if those lots are truly RQL fraction defective.

Some of the standards manuals neglect to mention RQL/LTPD/LQ. They bury the consumer’s point in the structure of the sampling plan tables so that you do not have to think about it. Using such standards, you cannot be sure that the sampling plan that you choose will classify a lot as acceptable or rejectable the way that you want it to. The advantage of such standards is that by using them you can say truthfully that you are following the standard. This might make you politically safe but not technically safe from a quality standpoint.

The above description is for attribute sampling plans. If you have variables data you must additionally use the within-lot standard deviation. If your variables sampling plan is for the mean of a measured variable, AQL-mean and RQL-mean are used.

III. ACCEPTANCE SAMPLING / AQL
What is it, and what are acceptance sampling plans? Acceptance sampling is a procedure used for sentencing incoming batches. The most widely used plans are given by the Military Standard tables, which were developed during World War II.

Types of acceptance sampling plans
Sampling plans can be categorized across several dimensions:

Sampling by attributes vs. sampling by variables: When the item inspection leads to a binary result (either the item is conforming or nonconforming), or the number of nonconformities in an item is counted, then we are dealing with sampling by attributes. If the item inspection leads to a continuous measurement, then we are sampling by variables.

Incoming vs. outgoing inspection: If the batches are inspected before the product is shipped to the consumer, it is called outgoing inspection. If the inspection is done by the consumer, after the batches were received from the supplier, it is called incoming inspection.

Rectifying vs. non-rectifying sampling plans: This determines what is done with nonconforming items found during the inspection. When the cost of replacing faulty items with new ones, or reworking them, is accounted for, the sampling plan is rectifying.

Single, double, and multiple sampling plans: The sampling procedure may consist of drawing a single sample, or it may be done in two or more steps. A double sampling procedure means that if the sample taken from the batch is not informative enough, another sample is taken. In multiple sampling, additional samples can be drawn after the second sample.

AQL - Acceptable Quality Level The AQL of a sampling plan is a level of quality routinely accepted by the sampling plan. It is generally defined as that level of quality (percent defective, defects per hundred units, etc.) that the sampling plan will accept 95% of the time. This means lots at or better than the AQL are accepted at least 95% of the time. The AQL can be determined using the OC curve by finding that quality level on the bottom axis that corresponds to a probability of acceptance of 0.95 (95%) on the left axis.

Acceptable Quality Level (AQL) is the maximum percent defective that is considered satisfactory as a process average by the producer and consumer. In other words, if, on average, 4% (AQL=4.0) nonconforming product is acceptable to BOTH the producer and consumer, then the producer agrees to produce, on average, 4% nonconforming product.

OC Curve - Operating Characteristic Curve shows how the probability of acceptance (y-axis) depends on the quality level (bottom axis).


Acceptance sampling is an important aspect of statistical quality control. It originated back in World War II when the military had to determine which batches of ammunition to accept and which ones to reject. They knew that they couldn't test every bullet to determine if it will do its job in the field. On the other hand, they had to be confident that the bullets they're getting will not fail when their lives are already on the line. Acceptance sampling was the answer - testing a few representative bullets from the lot so they'll know how the rest of the bullets will perform.

Acceptance sampling is a compromise between not doing any inspection at all and 100% inspection. The scheme by which representative samples will be selected from a population and tested to determine whether the lot is acceptable or not is known as an acceptance plan or sampling plan. There are two major classifications of acceptance plans: based on attributes ("go, no-go") and based on variables. Sampling plans can be single, double or multiple. A single sampling plan for attributes consists of a sample of size n and an acceptance number c. The procedure operates as follows: select n items at random from the lot. If the number of defectives in the sample is c or fewer, the lot is accepted; otherwise, the lot is rejected.

In order to measure the performance of an acceptance or sampling plan, the Operating Characteristic (OC) curve is used. This curve plots the probability of accepting the lot (Y-axis) versus the lot fraction or percent defectives.

A single sampling plan, as previously defined, is specified by the pair of numbers (n,c). The sample size is n, and the lot is rejected if there are more than c defectives in the sample; otherwise the lot is accepted.

There are many distinctive approaches that could be used to construct such plans. Two widely used ways of picking (n, c) are:

a) Use tables (such as MIL STD 105D) that focus on either the AQL or the LTPD desired.

b) Specify two desired points on the OC curve and solve for the (n, c) that uniquely determines an OC curve going through these points (the two-point method, the reverse of the two-point method, and other OC-curve-based approaches, to name just a few).

IIIa. Use tables (such as MIL STD 105D) that focus on either the AQL or the LTPD desired

1. Choosing a Sampling Plan: MIL Standard 105D

Sampling plans are typically set up with reference to an acceptable quality level, or AQL . The AQL is the base line requirement for the quality of the producer's product. The producer would like to design a sampling plan such that the OC curve yields a high probability of acceptance at the AQL. On the other side of the OC curve, the consumer wishes to be protected from accepting poor quality from the producer. So the consumer establishes a criterion, the lot tolerance percent defective or LTPD . Here the idea is to only accept poor quality product with a very low probability. Mil. Std. plans have been used for over 50 years to achieve these goals.

The U.S. Department of Defense Military Standard 105E
Standard military sampling procedures for inspection by attributes were developed during World War II. Army Ordnance tables and procedures were generated in the early 1940's, and these grew into the Army Service Forces tables. At the end of the war, the Navy also worked on a set of tables. In the meantime, the Statistical Research Group at Columbia University performed research and produced many outstanding results on attribute sampling plans.


These three streams combined in 1950 into a standard called Mil. Std. 105A. It has since been modified from time to time and issued as 105B, 105C and 105D. Mil. Std. 105D was issued by the U.S. government in 1963. It was adopted in 1971 by the American National Standards Institute as ANSI Standard Z1.4, and in 1974 it was adopted (with minor changes) by the International Organization for Standardization as ISO Std. 2859. The latest revision is Mil. Std. 105E, issued in 1989. These three similar standards are continuously being updated and revised, but the basic tables remain the same. Thus the discussion that follows of the germane aspects of Mil. Std. 105E also applies to the other two standards.

Description of Mil. Std. 105D
This document is essentially a set of individual plans, organized in a system of sampling schemes. A sampling scheme consists of a combination of a normal sampling plan, a tightened sampling plan, and a reduced sampling plan, plus rules for switching from one to the other.

The foundation of the Standard is the acceptable quality level or AQL. In the following scenario, a certain military agency, called the Consumer from here on, wants to purchase a particular product from a supplier, called the Producer from here on.

In applying the Mil. Std. 105D it is expected that there is perfect agreement between Producer and Consumer regarding what the AQL is for a given product characteristic. It is understood by both parties that the Producer will be submitting for inspection a number of lots whose quality level is typically as good as specified by the Consumer. Continued quality is assured by the acceptance or rejection of lots following a particular sampling plan and also by providing for a shift to another, tighter sampling plan, when there is evidence that the Producer's product does not meet the agreed-upon AQL.

Mil. Std. 105E offers three types of sampling plans: single, double and multiple plans. The choice is, in general, up to the inspectors. Because of the three possible selections, the standard does not give a sample size, but rather a sample code letter. This, together with the decision of the type of plan yields the specific sampling plan to be used.

In addition to an initial decision on an AQL it is also necessary to decide on an "inspection level". This determines the relationship between the lot size and the sample size. The standard offers three general and four special levels.

The steps in the use of the standard can be summarized as follows:

1. Decide on the AQL.
2. Decide on the inspection level.
3. Determine the lot size.
4. Enter the table to find the sample size code letter.
5. Decide on the type of sampling to be used.
6. Enter the proper table to find the plan to be used.
7. Begin with normal inspection, follow the switching rules and the rule for stopping the inspection (if needed).

2. Lot Acceptance Sampling Plans (LASPs)

A lot acceptance sampling plan (LASP) is a sampling scheme and a set of rules for making decisions. The decision, based on counting the number of defectives in a sample, can be to accept the lot, reject the lot, or even, for multiple or sequential sampling schemes, to take another sample and then repeat the decision process.

LASPs fall into the following categories:

Single sampling plans: One sample of items is selected at random from a lot, and the disposition of the lot is determined from the resulting information. These plans are usually denoted as (n,c) plans for a sample size n, where the lot is rejected if there are more than c defectives. These are the most common (and easiest) plans to use, although not the most efficient in terms of the average number of samples needed.

Double sampling plans: After the first sample is tested, there are three possibilities:

1. Accept the lot
2. Reject the lot
3. No decision

If the outcome is (3), a second sample is taken, and the procedure is to combine the results of both samples and make a final decision based on that information.

Multiple sampling plans: This is an extension of the double sampling plans, where more than two samples may be needed to reach a conclusion. The advantage of multiple sampling is smaller sample sizes.

Sequential sampling plans: This is the ultimate extension of multiple sampling, where items are selected from a lot one at a time, and after inspection of each item a decision is made to accept the lot, reject the lot, or select another unit.

Skip lot sampling plans: Skip lot sampling means that only a fraction of the submitted lots are inspected.

Deriving a plan, within one of the categories listed above, is discussed in the pages that follow. All derivations depend on the properties you want the plan to have. These are described using the following terms:

Acceptable Quality Level (AQL): The AQL is a percent defective that is the baseline requirement for the quality of the producer's product. The producer would like to design a sampling plan such that there is a high probability of accepting a lot that has a defect level less than or equal to the AQL.

Lot Tolerance Percent Defective (LTPD): The LTPD is a designated high defect level that would be unacceptable to the consumer. The consumer would like the sampling plan to have a low probability of accepting a lot with a defect level as high as the LTPD.

Type I Error (Producer's Risk): This is the probability, for a given (n,c) sampling plan, of rejecting a lot that has a defect level equal to the AQL. The producer suffers when this occurs, because a lot with acceptable quality was rejected. The symbol α is commonly used for the Type I error, and typical values for α range from 0.2 to 0.01.

Type II Error (Consumer's Risk): This is the probability, for a given (n,c) sampling plan, of accepting a lot with a defect level equal to the LTPD. The consumer suffers when this occurs, because a lot with unacceptable quality was accepted. The symbol β is commonly used for the Type II error, and typical values for β range from 0.2 to 0.01.

Operating Characteristic (OC) Curve: This curve plots the probability of accepting the lot (Y-axis) versus the lot fraction or percent defectives (X-axis). The OC curve is the primary tool for displaying and investigating the properties of a LASP.

Average Outgoing Quality (AOQ): A common procedure, when sampling and testing is non-destructive, is to 100% inspect rejected lots and replace all defectives with good units. In this case, all rejected lots are made perfect and the only defects left are those in lots that were accepted. AOQ refers to the long-term defect level for this combined LASP and 100% inspection of rejected lots. If all lots come in with a defect level of exactly p, and the OC curve for the chosen (n,c) LASP indicates a probability pa of accepting such a lot, over the long run the AOQ can easily be shown to be:

AOQ = pa p (N - n) / N

where N is the lot size.

Average Outgoing Quality Level (AOQL): A plot of the AOQ (Y-axis) versus the incoming lot quality p (X-axis) will start at 0 for p = 0, and return to 0 for p = 1 (where every lot is 100% inspected and rectified). In between, it will rise to a maximum. This maximum, which is the worst possible long-term AOQ, is called the AOQL.

Average Total Inspection (ATI): When rejected lots are 100% inspected, it is easy to calculate the ATI if lots come in consistently with a defect level of p. For a LASP (n,c) with a probability pa of accepting a lot with defect level p, we have


ATI = n + (1 - pa) (N - n)

where N is the lot size.

Average Sample Number (ASN): For a single sampling LASP (n,c) we know each and every lot has a sample of size n taken and inspected or tested. For double, multiple and sequential LASP's, the amount of sampling varies depending on the number of defects observed. For any given double, multiple or sequential plan, a long term ASN can be calculated assuming all lots come in with a defect level of p. A plot of the ASN, versus the incoming defect level p, describes the sampling efficiency of a given LASP scheme.

IIIb. Specify two desired points on the OC curve and solve for the (n,c) that uniquely determines an OC curve going through these points

Choosing a Sampling Plan with a given OC Curve

We start by looking at a typical OC curve. The OC curve for a (52, 3) sampling plan is shown below.

It is instructive to show how the points on this curve are obtained, once we have a sampling plan (n,c) - later we will demonstrate how a sampling plan (n,c) is obtained.

We assume that the lot size N is very large, as compared to the sample size n, so that removing the sample doesn't significantly change the remainder of the lot, no matter how many defects are in the sample. Then the distribution of the number of defectives, d, in a random sample of n items is approximately binomial with parameters n and p, where p is the fraction of defectives per lot.

The probability of observing exactly d defectives is given by the binomial distribution:

P(d) = [n! / (d!(n - d)!)] p^d (1 - p)^(n - d)

The probability of acceptance is the probability that d, the number of defectives, is less than or equal to c, the accept number. This means that

Pa = P(d ≤ c) = Σ (from d = 0 to c) [n! / (d!(n - d)!)] p^d (1 - p)^(n - d)


Sample table of Pa versus p using the binomial distribution: using this formula with n = 52, c = 3, and p = .01, .02, ..., .12, we can compute the acceptance probabilities tabulated below.
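The acceptance probabilities for the (52, 3) plan can be regenerated directly from the binomial formula (a sketch assuming scipy is available):

from scipy import stats

n, c = 52, 3
for p in [i / 100 for i in range(1, 13)]:
    pa = stats.binom.cdf(c, n, p)        # probability of c or fewer defectives
    print(f"p = {p:.2f}   Pa = {pa:.3f}")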

Solving for (n,c): Equations for calculating a sampling plan with a given OC curve

In order to design a sampling plan with a specified OC curve one needs two designated points. Let us design a sampling plan such that the probability of acceptance is 1 - α for lots with fraction defective p1 and the probability of acceptance is β for lots with fraction defective p2. Typical choices for these points are: p1 is the AQL, p2 is the LTPD, and α, β are the Producer's Risk (Type I error) and Consumer's Risk (Type II error), respectively.

If we are willing to assume that binomial sampling is valid, then the sample size n and the acceptance number c are the solution to

1 - α = Σ (from d = 0 to c) [n! / (d!(n - d)!)] p1^d (1 - p1)^(n - d)
β = Σ (from d = 0 to c) [n! / (d!(n - d)!)] p2^d (1 - p2)^(n - d)

These two simultaneous equations are nonlinear, so there is no simple, direct solution. There are, however, a number of iterative techniques available that give approximate solutions, so that composing a computer program poses few problems.
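One simple iterative approach (a sketch, not the only possible search strategy): for each candidate acceptance number c, find the smallest n whose OC curve is at or above 1 - α at p1 and at or below β at p2. The function name and search limits are illustrative assumptions.

from scipy import stats

def find_plan(p1, p2, alpha=0.05, beta=0.10, max_n=2000):
    # Smallest single sampling plan (n, c) meeting both OC-curve points
    for c in range(0, 50):
        for n in range(c + 1, max_n):
            if (stats.binom.cdf(c, n, p1) >= 1 - alpha and
                    stats.binom.cdf(c, n, p2) <= beta):
                return n, c
    return None

# e.g. find_plan(0.01, 0.05) searches for a plan matching AQL = 1% and LTPD = 5%
print(find_plan(0.01, 0.05))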

Calculating AOQ's: Average Outgoing Quality (AOQ)

We can also calculate the AOQ for a (n,c) sampling plan, provided rejected lots are 100% inspected and defectives are replaced with good parts.

Assume all lots come in with exactly a p0 proportion of defectives. After screening a rejected lot, the final fraction defective will be zero for that lot. However, accepted lots have fraction defective p0. Therefore, the outgoing lots from the inspection stations are a mixture of lots with fraction defective p0 and fraction defective 0. Assuming the lot size is N, we have

AOQ = pa p0 (N - n) / N

For example, let N = 10000, n = 52, c = 3, and p, the quality of incoming lots, = 0.03. At p = 0.03, we glean from the OC curve table that pa = 0.930 and

AOQ = (.930)(.03)(10000 - 52) / 10000 = 0.02775.

Sample table of AOQ versus p

Setting p = .01, .02, ..., .12, we can generate the following table

A plot of the AOQ versus p is given below

From examining this curve we observe that when the incoming quality is very good (very small fraction of defectives coming in), then the outgoing quality is also very good (very small fraction of defectives going out). When the incoming lot quality is very bad, most of the lots are rejected and then inspected. The "duds" are eliminated or replaced by good ones, so that the quality of the outgoing lots, the AOQ, becomes very good. In between these extremes, the AOQ rises, reaches a maximum, and then drops.

The maximum ordinate on the AOQ curve represents the worst possible quality that results from the rectifying inspection program. It is called the average outgoing quality limit, (AOQL ).

From the table we see that the AOQL = 0.0372 at p = .06 for the above example. One final remark: if N >> n, then the AOQ ~ pa p

Calculating the Average Total Inspection (ATI)

What is the total amount of inspection when rejected lots are screened? If all lots contain zero defectives, no lot will be rejected. If all items are defective, all lots will be inspected, and the amount to be inspected is N. Finally, if the lot quality is 0 < p < 1, the average amount of inspection per lot will vary between the sample size n and the lot size N.

Let the quality of the lot be p and the probability of lot acceptance be pa, then the ATI per lot is

ATI = n + (1 - pa) (N - n)

For example, let N = 10000, n = 52, c = 3, and p = .03 We know from the OC table that pa = 0.930. Then ATI = 52 + (1-.930) (10000 - 52) = 753. (Note that while 0.930 was rounded to three decimal places, 753 was obtained using more decimal places.)
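Both figures can be reproduced with a few lines (a sketch assuming scipy is available):

from scipy import stats

N, n, c, p = 10_000, 52, 3, 0.03
pa = stats.binom.cdf(c, n, p)             # about 0.930

aoq = pa * p * (N - n) / N                # about 0.028
ati = n + (1 - pa) * (N - n)              # about 750
print(pa, aoq, ati)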

Page 229: Quality Course 2

Prepared by Haery Sihombing @ IP

Sihmo\bi E

Sampling Plan -23

Sample table of ATI versus p

Setting p = .01, .02, ..., .14 generates the following table.

A plot of ATI versus p, the Incoming Lot Quality (ILQ) is given below.

Double Sampling Plans

Double and multiple sampling plans were invented to give a questionable lot another chance. For example, if in double sampling the results of the first sample are not conclusive with regard to accepting or rejecting, a second sample is taken. Application of double sampling requires that a first sample of size n1 is taken at random from the (large) lot. The number of defectives is then counted and compared to the first sample's acceptance number a1 and rejection number r1. Denote the number of defectives in sample 1 by d1 and in sample 2 by d2, then:

If d1 ≤ a1, the lot is accepted.
If d1 ≥ r1, the lot is rejected.
If a1 < d1 < r1, a second sample is taken.

If a second sample of size n2 is taken, the number of defectives, d2, is counted. The total number of defectives is D2 = d1 + d2. This is now compared to the acceptance number a2 and the rejection number r2 of sample 2. In double sampling, r2 = a2 + 1 to ensure a decision on the sample.

If D2 ≤ a2, the lot is accepted.
If D2 ≥ r2, the lot is rejected.
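A small sketch of that decision rule (the function name and the plan numbers a1, r1, a2 are illustrative assumptions):

def double_sample_decision(d1, d2=None, a1=2, r1=5, a2=6):
    # First-sample decision
    if d1 <= a1:
        return "accept"
    if d1 >= r1:
        return "reject"
    # Otherwise a second sample is required; r2 = a2 + 1 forces a decision
    if d2 is None:
        return "take second sample"
    return "accept" if d1 + d2 <= a2 else "reject"

print(double_sample_decision(3))          # take second sample
print(double_sample_decision(3, d2=2))    # accept, since D2 = 5 <= a2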

Design of a Double Sampling Plan

The parameters required to construct the OC curve are similar to those of the single sample case. The two points of interest are (p1, 1 - α) and (p2, β), where p1 is the lot fraction defective for plan 1 and p2 is the lot fraction defective for plan 2. As far as the respective sample sizes are concerned, the second sample size must be equal to, or an even multiple of, the first sample size.

There exist a variety of tables that assist the user in constructing double and multiple sampling plans. The index to these tables is the p2/p1 ratio, where p2 > p1. One set of tables, taken from the Army Chemical Corps Engineering Agency for α = .05 and β = .10, is given below.


Example
We wish to construct a double sampling plan according to

p1 = 0.01, α = 0.05; p2 = 0.05, β = 0.10; and n1 = n2

The plans in the corresponding table are indexed on the ratio R = p2/p1 = 5.

We find the row whose R is closest to 5. This is the 5th row (R = 4.65), which gives c1 = 2 and c2 = 4. The value of n1 is determined from either of the two columns labeled pn1.

The left column holds α constant at 0.05 (P = 0.95 = 1 - α) and the right column holds β constant at 0.10 (P = 0.10). Then, holding α constant, we find pn1 = 1.16, so n1 = 1.16/p1 = 116. And, holding β constant, we find pn1 = 5.39, so n1 = 5.39/p2 = 108. Thus the desired sampling plan is

n1 = 108, c1 = 2, n2 = 108, c2 = 4

If we opt for n2 = 2n1, and follow the same procedure using the appropriate table, the plan is:

n1 = 77 c1 = 1 n2 = 154 c2 = 4

The first plan requires fewer samples if the number of defectives in sample 1 is greater than 2, while the second plan requires fewer samples if the number of defectives in sample 1 is less than 2.

ASN Curve for a Double Sampling Plan

Since when using a double sampling plan the sample size depends on whether or not a second sample is required, an important consideration for this kind of sampling is the Average Sample Number (ASN) curve. This curve plots the ASN versus p', the true fraction defective in an incoming lot.

We will illustrate how to calculate the ASN curve with an example. Consider a double-sampling plan n1 = 50, c1= 2, n2 = 100, c2 = 6, where n1 is the sample size for plan 1, with accept number c1, and n2, c2, are the sample size and accept number, respectively, for plan 2.


Let p' = .06. Then the probability of acceptance on the first sample, which is the chance of getting two or fewer defectives, is .416 (using binomial tables). The probability of rejection on the first sample, which is the chance of getting more than six defectives, is (1 - .971) = .029. The probability of making a decision on the first sample is therefore .445, the sum of .416 and .029. With complete inspection of the second sample, the average sample size is equal to the size of the first sample times the probability that there will be only one sample, plus the size of the combined samples times the probability that a second sample will be necessary. For the sampling plan under consideration, the ASN with complete inspection of the second sample for a p' of .06 is

50(.445) + 150(.555) = 106

The general formula for an average sample number curve of a double-sampling plan with complete inspection of the second sample is

ASN = n1P1 + (n1 + n2)(1 - P1) = n1 + n2(1 - P1)

where P1 is the probability of a decision on the first sample. The graph below shows a plot of the ASN versus p'.

The ASN curve for a double sampling plan
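The calculation above can be reproduced directly from the binomial model. The short sketch below assumes a binomial distribution for the number of defectives (as in the worked example) and recomputes the ASN for the plan n1 = 50, c1 = 2, n2 = 100, c2 = 6 at p' = .06; the function names are illustrative.

```python
# Sketch of the ASN calculation for a double sampling plan with complete
# inspection of the second sample, using the binomial model.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def asn_double(n1, c1, n2, c2, p):
    p_accept_first = binom_cdf(c1, n1, p)        # d1 <= c1: accept on sample 1
    p_reject_first = 1 - binom_cdf(c2, n1, p)    # d1 > c2: reject on sample 1
    p1 = p_accept_first + p_reject_first         # probability of a first-sample decision
    return n1 * p1 + (n1 + n2) * (1 - p1)

print(f"{asn_double(50, 2, 100, 6, 0.06):.1f}")  # about 105.5, i.e. roughly 106 as above
```

Evaluating asn_double over a range of p' values traces out the ASN curve referred to above.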

Skip Lot Sampling

Skip-lot sampling means that only a fraction of the submitted lots are inspected. This mode of sampling is of the cost-saving variety in terms of time and effort. However, skip-lot sampling should only be used when it has been demonstrated that the quality of the submitted product is very good.

A skip-lot sampling plan is implemented as follows:

1. Design a single sampling plan by specifying the alpha and beta risks and the consumer's/producer's risks. This plan is called "the reference sampling plan".
2. Start with normal lot-by-lot inspection, using the reference plan.
3. When a pre-specified number, i, of consecutive lots are accepted, switch to inspecting only a fraction f of the lots. The selection of the members of that fraction is done at random.
4. When a lot is rejected, return to normal inspection.

The parameters f and i are essential to calculating the probability of acceptance for a skip-lot sampling plan. In this scheme, i, called the clearance number, is a positive integer and the sampling fraction f is such that 0 < f< 1. Hence, when f = 1 there is no longer skip-lot sampling. The calculation of the acceptance probability for the skip-lot sampling plan is performed via the following formula
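A form of this acceptance probability consistent with the standard skip-lot treatment (and with the relationships listed below) is

Pa = [f·P + (1 - f)·P^i] / [f + (1 - f)·P^i]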


where P is the probability of accepting a lot with a given proportion of incoming defectives p, from the OC curve of the single sampling plan.

The following relationships hold:

For a given i, the smaller f is, the greater Pa is.
For a given f, the smaller i is, the greater Pa is.

An illustration of a skip-lot sampling plan is given below.

An important property of skip-lot sampling plans is the average sample number (ASN ). The ASN of a skip-lot sampling plan is

ASN(skip lot) = F × ASN(reference)

where F is defined by
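A form consistent with the standard skip-lot treatment, and with the property 0 < F < 1 used below, is

F = f / [(1 - f)·P^i + f]

with P, f, and i as defined above.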

Therefore, since 0 < F < 1, it follows that the ASN of skip-lot sampling is smaller than the ASN of the reference sampling plan.

In summary, skip-lot sampling is preferred when the quality of the submitted lots is excellent and the supplier can demonstrate a proven track record.

Military Standard 105E (ISO 2859, ANSI/ASQC Z1.4)

The original version of the standard (MIL STD 105A) was issued in 1950. The last revision (MIL STD 105E) was issued in 1989, but canceled in 1995. The standard was adopted by the International Organization for Standardization as ISO 2859.

The tables give inspection plans for sampling by attributes for a given batch size and acceptable quality level (AQL). An inspection plan includes: the sample size(s) (n), the acceptance number(s) (c), and the rejection number(s) (r). The single sampling procedure with these parameters is as follows: Draw a random sample of n items from the batch. Count the number of nonconforming items within the sample (or the number of nonconformities, if more than one nonconformity is possible on a single item). If the number of nonconforming items is c or less, accept the entire batch. If it is r or more, reject it. In most cases r = c + 1 (for double and multiple plans, there are several values for the sample sizes, acceptance, and rejection numbers).

The standard includes three types of inspection (normal, tightened, and reduced inspection). The type of inspection that should be applied depends on the quality of the last batches inspected. At the beginning of inspection, normal inspection is used. The types of inspection differ as follows:

Tightened inspection (for a history of low quality) requires a larger sample size than under normal inspection.

Reduced sampling (for a history of high quality) has a higher acceptance number relative to normal inspection (so it is easier to accept the batch)

There are special switching rules between the three types of inspection, as well as a rule for discontinuation of inspection. These rules are empirically based.

Supplier and Consumer Risks
The supplier risk is the risk that a batch of high quality (according to the AQL) is rejected. The consumer risk is the risk that a batch of low quality will be accepted. The military standard plans assure a supplier risk of 0.01-0.1 (depending on the plan). The only way to control the consumer risk is by changing the inspection level.

Batch/Lot
A batch is a collection of items from which a sample will be drawn, for deciding on its conformance during acceptance inspection. A batch should include items of the same type, size, etc., that were produced under the same production conditions and time. The batch size is the number of items in a lot or a batch.

Clearance Number
The number of consecutive items (or batches, in skip-lot sampling) that must be found conforming in order to quit the screening phase (100% inspection) when applying continuous sampling.

Inspection Levels for Military Standard 105E (MIL-STD-105E)
The inspection level determines the relation between the batch size and sample size. Levels I, II, and III are general inspection levels:

Level II is designated as normal.
Level I requires about half the amount of inspection as level II, and is used when reduced sampling costs are required and a lower level of discrimination (or power) can be tolerated.
Level III requires about twice the amount of inspection as level II, and is used when more discrimination (or power) is needed.

The four special inspection levels S-1, S-2, S-3, and S-4 use very small samples, and should be employed when small sample sizes are necessary and when large sampling risks can be tolerated.

Inspection Levels for Military Standard 414 (MIL-STD-414)
The inspection level determines the relation between the batch size and sample size. Levels I, II, III, IV, and V are general inspection levels:

Level IV is designated as normal.
Level V requires a larger amount of inspection than level IV, and is used when more discrimination (or power) is needed.
Levels I, II, and III require less inspection than level IV, and are used when reduced sampling costs are required and a lower level of discrimination (or power) can be tolerated.

Maximal Run Length Value
The largest number on the horizontal axis in the run length plot, i.e., the largest value of t on the plot for which P(RL = t) is plotted. For example, selecting 500 will give a probability plot of run lengths in the range 1, 2, ..., 500.


Nonconforming Items
The nonconformity of an item is expressed as the percent of nonconforming items. When each item can contain more than one defect, the nonconformity is expressed as the number of nonconformities (defects) per 100 items.

Percent/Proportion Non-Conforming (p)
The percent or proportion of non-conforming items in a batch or in a process. In many cases this is unknown, but it is used to learn about scenarios for different values of p.

Rejection Limit (r)
The smallest number of non-conforming items in a sample that would lead to the rejection of the entire lot. In most cases (besides reduced sampling) this value is equal to the acceptance limit + 1.

Run Length
The run length is the number of samples taken until an alarm is signaled by the control chart.

Sample Size (n)
The number of items that should be randomly chosen from a batch.

Sampling Fraction (f)
The proportion of items (or batches, in skip-lot sampling) that are inspected during some phase when applying continuous sampling. f is between 0 and 1. There are three ways to sample with a fraction of f (a short sketch follows the list below):

1. Probability Sampling: Each item/batch is sampled with probability f.
2. Systematic Sampling: Every (1/f)-th item/batch is sampled. 1/f must then be a natural number (e.g., every 3rd item is inspected when f = 1/3).
3. Block-Random Sampling: From each 1/f consecutive items/batches, one is chosen at random. 1/f must then be a natural number (e.g., in each block of 3 items one is chosen, when f = 1/3).
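The three selection schemes can be sketched in a few lines of code. This is only an illustration, assuming f = 1/3 so that 1/f is a natural number; the function names and the example list of lots are made up for the demonstration.

```python
# Illustrative sketch of the three ways of inspecting a fraction f of lots.
import random

def probability_sampling(lots, f):
    """Each lot is independently selected for inspection with probability f."""
    return [lot for lot in lots if random.random() < f]

def systematic_sampling(lots, f):
    """Every (1/f)-th lot is selected, e.g. every 3rd lot when f = 1/3."""
    step = round(1 / f)
    return lots[step - 1::step]

def block_random_sampling(lots, f):
    """From each consecutive block of 1/f lots, one is chosen at random."""
    step = round(1 / f)
    return [random.choice(lots[i:i + step]) for i in range(0, len(lots), step)]

lots = list(range(1, 13))                   # twelve consecutive lots
print(systematic_sampling(lots, 1 / 3))     # [3, 6, 9, 12]
print(block_random_sampling(lots, 1 / 3))   # one lot from each block of three
print(probability_sampling(lots, 1 / 3))    # about a third of the lots, at random
```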

Shift Size
The purpose of using a control chart is to detect a shift in the process mean of a specific size. To detect a shift of two standard-deviations-of-the-mean, enter the value 2.

Type of Inspection
There are three types of inspection:

Normal inspection is used at the start of the inspection activity.

Tightened inspection is used when the vendor's recent quality history has deteriorated (acceptance criteria are more stringent than under normal inspection).

Reduced inspection is used when the vendor's recent quality history has been exceptionally good (sample sizes are usually smaller than under normal inspection).

IV ACCEPTANCE SAMPLING (Update by Dr. Wayne A. Taylor)

INTRODUCTION

An understanding of statistical principles and how to apply them is critical to ensuring compliance with FDA requirements, such as those in the current and July working draft of the good manufacturing practices (GMP) regulation. Indeed, numerous Form 483's issued by FDA following GMP inspections have cited sampling plans for final, in-process, and receiving inspection of devices and their components for not being "statistically valid". What exactly is required for a sampling plan to be statistically valid? After a brief overview of how sampling plans work, this article will address that question.

Several other sampling-related issues currently facing the medical device industry will also be discussed. In February of this year, the U.S. Department of Defense canceled Mil-Std-105E, which contained a widely used table of sampling plans. What are the alternatives and how should they be used? As the industry focuses increasingly on the prevention of defects and statistical process control (SPC), will the need for acceptance sampling disappear? And finally, how can the cost of acceptance sampling be reduced to help manufacturers remain competitive in today's marketplace?

HOW SAMPLING PLANS WORK

Stated simply, sampling plans are used to make product disposition decisions for each lot of product. With attribute sampling plans, these accept/reject decisions are based on a count of the number of defects and defectives, while variables sampling plans require measurements and calculations, with decisions based on the sample average and standard deviation. Plans requiring only a single sample set are known as single sampling plans; double and multiple sampling plans may require additional sample sets. For example, an attribute single sampling plan with a sample size n=50 and an accept number a=1 requires that a sample of 50 units be inspected. If the number of defectives in that sample is one or zero, the lot is accepted. Otherwise it is rejected.

Ideally, when a sampling plan is used, all bad lots will be rejected and all good lots accepted. However, because accept/reject decisions are based on a sample of the lot, there is always a chance of making an incorrect decision. So what protection does a sampling plan offer? The behavior of a sampling plan can be described by its operating characteristic (OC) curve, which plots percent defectives versus the corresponding probabilities of acceptance. Figure 1 shows the OC curve of the attribute single sampling plan described above. With that plan, if a lot is 3% defective the corresponding probability of acceptance is 0.56. Similarly, the probability of accepting lots that are 1% defective is 0.91 and the probability of accepting lots that are 7% defective is 0.13.

Figure 1: OC Curve of Single Sampling Plan n=50 and a=1
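The probabilities quoted above can be checked directly. The following is a minimal sketch assuming the binomial model for the number of defectives in the sample (lot size much larger than the sample); the function name is illustrative.

```python
# Probability of acceptance for the single sampling plan n=50, a=1:
# accept whenever the sample contains a or fewer defectives.
from math import comb

def prob_accept(n, a, p):
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(a + 1))

for p in (0.01, 0.03, 0.07):
    print(f"{p:.0%} defective -> Pa = {prob_accept(50, 1, p):.2f}")
# Prints roughly 0.91, 0.56, and 0.13, matching the values cited for Figure 1.
```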

An OC curve is generally summarized by two points on the curve: the acceptable quality level (AQL) and the lot tolerance percent defective (LTPD). The AQL describes what the sampling plan generally accepts; formally, it is that percent defective with a 95% chance of acceptance. The LTPD, which describes what the sampling plan generally rejects, is that percent defective with a 10% chance of acceptance. As shown in Figure 2, the single sampling plan n=50 and a=1 has an AQL of 0.72% defective and an LTPD of 7.6%. The sampling plan routinely accepts lots that are 0.72% or better and rejects lots that are 7.6% defective or worse. Lots that are between 0.72% and 7.6% defective are sometimes accepted and sometimes rejected.


Figure 2: AQL and LTPD of Single Sampling Plan n=50 and a=1
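The AQL and LTPD of a plan can also be found numerically from its OC curve: the AQL is the percent defective at which Pa = 0.95 and the LTPD is the percent defective at which Pa = 0.10. The sketch below does this by bisection under the same binomial assumption used above; the helper names are illustrative.

```python
# Locate the AQL (Pa = 0.95) and LTPD (Pa = 0.10) of the plan n=50, a=1.
from math import comb

def prob_accept(n, a, p):
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(a + 1))

def fraction_defective_at(n, a, target_pa):
    """Bisect for the fraction defective p at which Pa(p) equals target_pa."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if prob_accept(n, a, mid) > target_pa:
            lo = mid            # Pa still too high, so p must be larger
        else:
            hi = mid
    return (lo + hi) / 2

print(f"AQL  = {fraction_defective_at(50, 1, 0.95):.2%}")   # about 0.72%
print(f"LTPD = {fraction_defective_at(50, 1, 0.10):.1%}")   # about 7.6%
```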

Manufacturers must know and document the AQLs and LTPDs of the sampling plans used for their products. The AQLs and LTPDs of individual sampling plans can be found in Table X of MIL-STD-105E, and Chart XV of ANSI Z1.4 gives the AQLs and LTPDs of entire switching systems (described below).[1,2] Software also can be used to obtain the AQLs and LTPDs of a variety of sampling plans.[3]

SELECTING STATISTICALLY VALID SAMPLING PLANS

Documenting the protection provided by your company's sampling plans is only half the job. You must also provide justification for the AQLs and LTPDs used. This requires that the purpose of each inspection be clearly defined. Depending on past history and other circumstances, sampling plans can be used for a variety of purposes. In the examples described below, an AQL of 1.0% is specified for inspections for major defects. The AQL given in this specification is not necessarily equal to the sampling plan AQL, and so it will be referred to as Spec-AQL to make this distinction clear.

Spec-AQLs are commonly interpreted as the maximum percent defective for which acceptance is desired. Lots below the Spec-AQL are best accepted; lots above the Spec-AQL are best rejected. The Spec-AQL, therefore, represents the break-even quality between acceptance and rejection. For lots with percent defectives below the Spec-AQL, the cost of performing a 100% inspection will exceed the benefits of doing so in terms of fewer defects released. Since this cost is ultimately passed on to the customer, it is not in the customer's best interest for the manufacturer to spend $1000 to 100% inspect a lot if only one defect is found that otherwise would have cost the customer $100. Spec-AQLs should not be interpreted as permission to produce defects; however, once lots have been produced, the Spec-AQLs provide guidance on making product disposition decisions.

Example 1: If a process is known to consistently produce lots with percent defectives above the Spec-AQL, all lots should be 100% inspected, but if some lots are below the Spec-AQL, the company could use a sampling plan to screen out lots not requiring 100% inspection. To ensure that lots worse than the Spec-AQL are rejected, a sampling plan with an LTPD equal to the Spec-AQL can be used, but at the risk of rejecting some acceptable lots. For a Spec-AQL of 1.0%, the single sampling plan n=230 and a=0, which has an LTPD of 1.0%, would be appropriate. There is a simple formula for determining the sample size for such studies. Assuming an accept number of zero and a desired confidence level of 90%, the required sample size is

n = 230/Spec-AQL

For 95% confidence, the formula is

n = 300/Spec-AQL
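A brief sketch of where these constants come from: for an accept-on-zero plan, the probability of accepting a lot with fraction defective p is (1 - p)^n, so requiring a 90% chance of rejection (a 10% chance of acceptance) at p = Spec-AQL gives (1 - p)^n = 0.10, or n = ln(0.10)/ln(1 - p), which is approximately 2.3/p. With the Spec-AQL expressed in percent, this is roughly 230/Spec-AQL; using 0.05 in place of 0.10 gives ln(0.05), approximately 3.0, and hence roughly 300/Spec-AQL.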


Example 2: The same sampling plan might also be used to validate a process for which there is no prior history. Before reduced levels of inspection are implemented, it should be demonstrated that the process regularly produces lots below the Spec-AQL. If the first three lots pass inspections using a sampling plan with an LTPD equal to the Spec-AQL of 1.0%, the manufacturer can state with 90% confidence that each of these lots is <1% defective.

However, other sampling plans might be better choices. Suppose the process is expected to yield lots that are around 0.2% defective. The sampling plan n=230 and a=0 has an AQL of 0.022% and therefore runs a sizeable risk of failing the validation procedure. A sampling plan with an AQL of 0.2% and an LTPD of 1% would be a better choice. Using the software cited earlier, the resulting plan is n=667 and a=3.[3]

Example 3: Once it has been established that the process consistently produces lots with percent defectives below the Spec-AQL, the objective of future inspections might be to ensure that lots with >=4% defective are not released. This requires a sampling plan with an LTPD of 4%. Because the sampling plan should also ensure that lots below the Spec-AQL are released, the plan's AQL should be equal to the Spec-AQL. According to Table I, which gives a variety of sampling plans indexed by their AQLs and LTPDs, the single sampling plan n=200 and a=4 is the closest match.[3] It has an LTPD of 3.96% and an AQL of 0.990%, and thus is statistically valid for this purpose.

Example 4: Now suppose that the process has run for 6 months with an average yield of 0.1% defectives and no major problems. Although the process has a good history, there is still some concern that something could go wrong; as a result, the manufacturer should continue to inspect a small number of samples from each lot. For example, a sampling plan might be selected that ensures that a major process failure resulting in >=20% defective will be detected on the first lot. The sampling data can then be trended to detect smaller changes over extended periods of time.

When selecting a sampling plan to detect a major process failure, the nature of the potential failure modes should be considered. If the primary failure mode of concern is a clogged filter and past failures have resulted in >=20% defectives, the single sampling plan n=13 and a=0, which has an LTPD of 16.2% and an AQL of 0.4%, is statistically valid.[3] If the potential failure mode of concern is a failure to add detergent to the wash cycle, with a resulting failure rate of 100%, the single sampling plan n=1 and a=0 is valid.

Example 5: Finally, one might have a proven process for which procedures are in place that minimize the likelihood of a process failure going undetected. At that point, acceptance sampling might be limited to the routine collection of data sufficient to plot process-average trends. There is nothing wrong with simply stating in the written procedures that acceptance sampling is not needed and that the inspections being performed should be considered as process audits.

In summary, selecting a statistically valid sampling plan is a two-part process. First, the purpose of the inspection should be clearly stated and the appropriate AQL and LTPD selected; then, a sampling plan should be selected based on the chosen AQL and LTPD. Because different sampling plans may be statistically valid at different times, all plans should be periodically reviewed. If a medical device manufacturer doesn't know the protection provided by its sampling plans or is unclear as to the purpose of its inspections, it is at risk.

MIL-STD-105E AND ANSI Z1.4

The February 1995 cancellation of MIL-STD-105E by the Department of Defense, while directly affecting military purchasing contracts, will also have an impact on the medical device and diagnostic industries. The cancellation notice indicates that future acquisitions should refer to an acceptable nongovernment standard such as ANSI Z1.4. To many, this news came as a shock. However, the change is not nearly as drastic as it first seems. The differences between ANSI Z1.4 and MIL-STD-105E are minor, and the elimination of the latter was basically a cost-saving measure to eliminate the duplication of effort associated with maintaining two nearly equivalent standards.


Nevertheless, because manufacturers may need to update many specifications as a result of this change, now is an especially appropriate time to reexamine MIL-STD-105E and Z1.4 and how to use them to select valid sampling plans. Although the term Z1.4 will be used in the following discussion, all of the ensuing comments apply equally to MIL-STD-105E.

Let us start with what Z1.4 is not. Used by many industries, Z1.4 is not a table of statistically valid sampling plans. Instead, it contains a broad array of sampling plans that might be of interest to anyone. For example, one plan requires two samples and has an accept number of 30. Such a plan would never be appropriate for a medical device, but is applicable in other industries.

Furthermore, Z1.4 is, in fact, a sampling system. A user references the sampling plans in Z1.4 by specifying an AQL and a level of inspection, and then following a set of procedures for determining what sampling plan to use based both on lot size and the quality of past lots. The Z1.4 system includes tightened, normal, and reduced sampling plans and a set of rules for switching between them. Although these switching rules are frequently ignored, they are an integral part of the standard. As Z1.4 states:

This standard is intended to be used as a system employing tightened, normal, and reduced inspection on a continuing series of lots .... Occasionally specific individual plans are selected from the standard and used without the switching rules. This is not the intended application of the ANSI Z1.4 system and its use in this way should not be referred to as inspection under ANSI Z1.4.[2]

Several companies have received Form 483s from FDA for not using the switching rules, a problem that could have been avoided by having written procedures specifying that the switching rules are not used. When is the use of switching rules appropriate and when should individual sampling plans be selected instead? Z1.4 was developed specifically to induce suppliers "to maintain a process average at least as good as the specified AQL while at the same time providing an upper limit on the consideration of the consumer's risk of accepting occasional poor lots."[2] Thus, the Z1.4 switching system should not be used to inspect isolated lots, nor should it be used to specify the level of protection for individual lots. In those cases individual plans should be selected instead.

One situation warrants special mention. Acceptance sampling is frequently used for processes that generally produce good product but might on occasion break down and produce high levels of defects. If protection against isolated bad lots or the first bad lot following a series of good lots is the key concern, the Z1.4 switching rules should not be used or, if they are, the reduced inspection should be omitted. Because the Z1.4 switching rules are designed to react to gradual shifts in the process average, they frequently fail to detect isolated bad lots and do not react quickly to sudden shifts in lot quality. Even when appropriate, the Z1.4 switching rules are complicated to apply. However, quick switching systems have been developed that are both simpler to use and provide better protection during periods of changing quality.[3]

Finally, there are two common misconceptions about Z1.4. Many people believe that the required sample sizes increase for larger lots because more samples are required from such lots to maintain the desired level of protection. The truth is that the standard specifies larger sample sizes to increase the protection provided for larger lots. The reason for this increase is based on economics: It is more expensive to make errors classifying large lots; as a result, Z1.4 requires more samples from larger lots to reduce the risk of such errors. To maintain the same level of protection, one can simply select a sampling plan based on its OC curve and then use this plan for all lots regardless of size. The single sampling plan n=13 and a=0 provides the same protection for a lot of 200 units as for a lot of 200,000 units.[3,4]

The second misconception is that use of Z1.4 ensures that lots worse than the AQL are rejected. According to this misconception, if the AQL is 1%, lots with >1% defectives are routinely rejected. The truth is that there is a sizable risk of releasing such lots--one sampling plan with an AQL of 1% will at times accept lots that are as much as 16% defective. The protection provided by the sampling plan is determined by its LTPD, not its AQL, which reveals nothing about what a sampling plan will reject. As a result of this misconception, many manufacturers believe that their sampling plans provide greater protection than they do. This illusion can lead to the use of inappropriate sampling plans and can provide a false sense of security. Repeating the advice given earlier, manufacturers should determine and document the actual AQLs and LTPDs of all their sampling plans.

While Z1.4 and equivalent standards are widely used by the device industry, rarely are they used in the manner intended. Most commonly, individual sampling plans are selected from them. Other tables are better suited for this purpose, and companies should not be afraid to switch to using those tables. In addition, using Z1.4 does not ensure valid sampling plans and, in fact, can complicate the selection process.

SPC VERSUS ACCEPTANCE SAMPLING?

Much has been written about the greater benefits to be achieved by using SPC as opposed to acceptance sampling. But although preventing defects is certainly more desirable than detecting them through inspection, SPC does not eliminate the need for acceptance sampling. As indicated in Table II, there are fundamental differences between the two techniques. In SPC, control charts are used to make process control and process improvement decisions, and actions are taken on the process to ensure that future products are good. In contrast, sampling plans are used to make product disposition decisions, and actions are taken on previously produced lots to ensure the quality of released product. Ideally, with SPC in place no defectives will ever be made and acceptance sampling will become unnecessary; in practice, however, all processes have some risk of failure, and thus some procedure for accepting and rejecting product is generally required.

Table II: Differences Between Control Charts and Sampling Plans

              Control Chart              Sampling Plan
Decision      Adjust or leave alone      Accept or reject
Act on        Process                    Product
Focus         Future product             Past product

Much of the reaction against acceptance sampling is attributed to quality guru W. Edwards Deming, who many believe advocated its elimination. However, what Deming really called for was ceasing reliance on acceptance sampling. If more time and resources are spent on acceptance sampling than on process improvement and control, or if a company believes that, no matter what else happens, its sampling plan ensures shipment of only good product, then that company is overly reliant on acceptance sampling. Instead, its focus should be on defect prevention and continuous process improvement.


Table I: Single Sampling Plans Indexed by AQL and LTPD

Each cell gives the plan as n/(Ac, Re) together with that plan's actual AQL and LTPD (both in percent defective). Within each AQL row, cells are listed left to right in order of the approximate LTPD/AQL ratios 45, 11, 6.5, 5, 4, 3.2, 2.8, 2.3, and 2; "-" indicates that no plan is tabulated for that column.

AQL 10%:   - ; 4/(1,2) AQL 9.76, LTPD 68.0 ; 9/(2,3) AQL 9.77, LTPD 49.0 ; 14/(3,4) AQL 10.4, LTPD 41.70 ; 20/(4,5) AQL 10.4, LTPD 36.1 ; 32/(6,7) AQL 10.7, LTPD 30.6 ; 50/(8,9) AQL 9.72, LTPD 24.7 ; 80/(12,13) AQL 9.89, LTPD 21.4 ; 125/(18,19) AQL 10.2, LTPD 19.3

AQL 6.5%:  - ; 6/(1,2) AQL 6.28, LTPD 51.0 ; 13/(2,3) AQL 6.61, LTPD 36.0 ; 20/(3,4) AQL 7.13, LTPD 30.4 ; 32/(4,5) AQL 6.36, LTPD 23.4 ; 50/(6,7) AQL 6.76, LTPD 20.10 ; 80/(8,9) AQL 6.00, LTPD 15.7 ; 125/(12,13) AQL 6.26, LTPD 13.9 ; 200/(18,19) AQL 6.31, LTPD 12.2

AQL 4.0%:  - ; 9/(1,2) AQL 4.10, LTPD 36.8 ; 20/(2,3) AQL 4.22, LTPD 24.5 ; 32/(3,4) AQL 4.38, LTPD 19.7 ; 50/(4,5) AQL 4.02, LTPD 15.4 ; 80/(6,7) AQL 4.18, LTPD 12.89 ; 125/(8,9) AQL 3.81, LTPD 10.2 ; 200/(12,13) AQL 3.89, LTPD 8.760 ; 315/(18,19) AQL 3.99, LTPD 7.77

AQL 2.5%:  2/(0,1) AQL 2.53, LTPD 68.4 ; 13/(1,2) AQL 2.81, LTPD 26.8 ; 32/(2,3) AQL 2.60, LTPD 15.8 ; 50/(3,4) AQL 2.78, LTPD 12.9 ; 80/(4,5) AQL 2.49, LTPD 9.74 ; 125/(6,7) AQL 2.66, LTPD 8.27 ; 200/(8,9) AQL 2.37, LTPD 6.42 ; 315/(12,13) AQL 2.46, LTPD 5.59 ; 500/(18,19) AQL 2.50, LTPD 4.92

AQL 1.5%:  3/(0,1) AQL 1.70, LTPD 53.6 ; 20/(1,2) AQL 1.81, LTPD 18.1 ; 50/(2,3) AQL 1.66, LTPD 10.4 ; 80/(3,4) AQL 1.73, LTPD 8.16 ; 125/(4,5) AQL 1.59, LTPD 6.29 ; 200/(6,7) AQL 1.65, LTPD 5.21 ; 315/(8,9) AQL 1.50, LTPD 4.09 ; 500/(12,13) AQL 1.54, LTPD 3.54 ; 800/(18,19) AQL 1.56, LTPD 3.08

AQL 1.0%:  5/(0,1) AQL 1.02, LTPD 36.9 ; 32/(1,2) AQL 1.12, LTPD 11.6 ; 80/(2,3) AQL 1.03, LTPD 6.52 ; 125/(3,4) AQL 1.10, LTPD 5.27 ; 200/(4,5) AQL 0.990, LTPD 3.96 ; 315/(6,7) AQL 1.05, LTPD 3.32 ; 500/(8,9) AQL 0.942, LTPD 2.59 ; 800/(12,13) AQL 0.964, LTPD 2.21 ; 1250/(18,19) AQL 0.998, LTPD 1.98

AQL 0.65%: 8/(0,1) AQL 0.639, LTPD 25.03 ; 50/(1,2) AQL 0.715, LTPD 7.56 ; 125/(2,3) AQL 0.657, LTPD 4.20 ; 200/(3,4) AQL 0.686, LTPD 3.31 ; 315/(4,5) AQL 0.627, LTPD 2.52 ; 500/(6,7) AQL 0.659, LTPD 2.10 ; 800/(8,9) AQL 0.588, LTPD 1.62 ; 1250/(12,13) AQL 0.616, LTPD 1.42 ; 2000/(18,19) AQL 0.623, LTPD 1.24

AQL 0.4%:  13/(0,1) AQL 0.394, LTPD 16.2 ; 80/(1,2) AQL 0.446, LTPD 4.78 ; 200/(2,3) AQL 0.410, LTPD 2.64 ; 315/(3,4) AQL 0.435, LTPD 2.11 ; 500/(4,5) AQL 0.395, LTPD 1.59 ; 800/(6,7) AQL 0.411, LTPD 1.31 ; 1250/(8,9) AQL 0.376, LTPD 1.04 ; 2000/(12,13) AQL 0.385, LTPD 0.888 ; -

AQL 0.25%: 20/(0,1) AQL 0.256, LTPD 10.9 ; 125/(1,2) AQL 0.285, LTPD 3.08 ; 315/(2,3) AQL 0.260, LTPD 1.685 ; 500/(3,4) AQL 0.274, LTPD 1.338 ; 800/(4,5) AQL 0.247, LTPD 0.997 ; 1250/(6,7) AQL 0.263, LTPD 0.841 ; 2000/(8,9) AQL 0.235, LTPD 0.649 ; - ; -

AQL 0.15%: 32/(0,1) AQL 0.160, LTPD 6.94 ; 200/(1,2) AQL 0.178, LTPD 1.93 ; 500/(2,3) AQL 0.164, LTPD 1.06 ; 800/(3,4) AQL 0.171, LTPD 0.833 ; 1250/(4,5) AQL 0.158, LTPD 0.638 ; 2000/(6,7) AQL 0.164, LTPD 0.526 ; - ; - ; -

AQL 0.1%:  50/(0,1) AQL 0.103, LTPD 4.50 ; 315/(1,2) AQL 0.113, LTPD 1.23 ; 800/(2,3) AQL 0.102, LTPD 0.664 ; 1250/(3,4) AQL 0.109, LTPD 0.534 ; 2000/(4,5) AQL 0.0986, LTPD 0.399 ; - ; - ; - ; -

AQL 0.065%: 80/(0,1) AQL 0.0641, LTPD 2.84 ; 500/(1,2) AQL 0.0711, LTPD 0.776 ; 1250/(2,3) AQL 0.0655, LTPD 0.425 ; 2000/(3,4) AQL 0.0683, LTPD 0.334 ; - ; - ; - ; - ; -

AQL 0.04%: 125/(0,1) AQL 0.0411, LTPD 1.83 ; 800/(1,2) AQL 0.0444, LTPD 0.485 ; 2000/(2,3) AQL 0.0408, LTPD 0.266 ; - ; - ; - ; - ; - ; -

AQL 0.025%: 200/(0,1) AQL 0.0256, LTPD 1.14 ; 1250/(1,2) AQL 0.0285, LTPD 0.311 ; - ; - ; - ; - ; - ; - ; -

AQL 0.015%: 315/(0,1) AQL 0.0163, LTPD 0.728 ; 2000/(1,2) AQL 0.0178, LTPD 0.194 ; - ; - ; - ; - ; - ; - ; -

AQL 0.01%: 500/(0,1) AQL 0.0103, LTPD 0.459 ; - ; - ; - ; - ; - ; - ; - ; -

AQL 0.0065%: 800/(0,1) AQL 0.00644, LTPD 0.287 ; - ; - ; - ; - ; - ; - ; - ; -

AQL 0.004%: 1250/(0,1) AQL 0.00415, LTPD 0.184 ; - ; - ; - ; - ; - ; - ; - ; -

AQL 0.0025%: 2000/(0,1) AQL 0.00253, LTPD 0.115 ; - ; - ; - ; - ; - ; - ; - ; -


The real issue is not SPC versus acceptance sampling; it is how to combine the two. Both techniques require routine product inspections; the trick is to use the same data for both purposes. When variables sampling is used--that is, when the data consist of actual measurements such as seal strength, fill volume, or flow rate--data can be combined on a single acceptance control chart. Figure 3 provides an example of such a chart containing fill-volume data. The inside pair of limits, UCL and LCL, are the control limits. A point falling outside these limits signals that the process is off target and that corrective action is required. The outside pair of limits, UAL and LAL, are the acceptance limits. A lot whose sample falls outside these limits is rejected. In the figure, lot 13 is outside the control limits but inside the acceptance limits, which indicates that the process has shifted. Corrective action on the process is required to maximize the chance that future products will be good; however, no action is required on the product lot. Rejecting whenever a point exceeds the control limits can result in the rejection of perfectly good lots. Similarly, it is wasteful to wait until the acceptance limits are exceeded before taking corrective action on the process. Therefore, separate limits for process and product actions are required. Such limits are also frequently called action limits, warning limits, and alert limits. No matter what the name, however, if the result of exceeding a limit is to act on the process, the limit is serving the purpose of a control limit; if action is instead taken on the product, the limit is serving as an acceptance limit.

Figure 3: Acceptance Control Chart

If attributes sampling is performed, the data must be handled much differently, and care must be taken in implementing SPC so that the resulting change is not illusory. Consider, for example, a packing operation that inspects for missing parts using the single sampling plan n=13 and a=0. Whenever a lot is rejected, an attempt is made to fix the process. Historically, the process has averaged around 0.2% defective. When management decides to implement SPC, a p-chart of the inspection data is constructed as shown in Figure 4. The upper control limit is 3.92%, and samples with one or more defectives exceed this control limit, triggering attempts to fix the process and rejection of recent product. The company can now state truthfully that SPC is used, but in reality nothing has changed--the same data are collected and the same actions taken. A better approach is to continue acceptance sampling as before and, because this does not protect against a gradual increase in the process average, to analyze the resulting data for trends. Figure 5 shows a p-chart of the same data, but with the data from each day combined. This chart indicates that a change occurred between days 5 and 6; this change is not so apparent in Figure 4.

Neither SPC nor acceptance sampling can detect a problem before defectives are produced. However, by accumulating data over time, attribute control charts can indicate small changes in the process average that acceptance sampling will not reveal. Used in combination, sampling plans provide immediate protection against major failures while control charts protect against minor sustained problems.


Figure 4: p-Chart of Inspection Results

Figure 5: Daily p-Chart

REDUCING INSPECTION COSTS

Two sampling plans can have the same AQL and LTPD and nearly equivalent OC curves. When they are followed, the same percentage of good lots will be accepted and the same percentage of bad lots rejected. The quality of the products released to customers will be the same, as will the reject and scrap rates. From a regulatory point of view, the two sampling plans are substantially equivalent. However, one of these plans may be less costly to use.

Consider an example. The ANSI Guideline for Gamma Sterilization provides procedures for establishing and monitoring radiation dosages. One procedure is a quarterly audit of the dosage that requires the sterilization of test units at a lower dosage than is actually used for the product. The test dose is selected to give an expected positive rate of 1%. (A positive is a unit that tests nonsterile.) For each audit, an initial sample of 100 units is tested. If two or fewer positives are found, the process has passed the audit; in the event of three or four positives, one retest can be performed. This quarterly audit procedure has an AQL of 1.50% and an LTPD of 5.55%. An alternative to this procedure is to test 50 samples, passing on zero positives and failing on four or more positives. In the event of 1 to 3 positives, a second sample of 100 units is tested. The audit is considered passed only if the cumulative number of positives in the 150 units is four or less. This double sampling plan has an AQL of 1.36% and LTPD of 5.73%.


Figure 6: OC Curves of the ANSI Quarterly Audit Sampling Plan and an Alternative Plan for Monitoring Sterilization Dosage

The OC curves of both procedures are nearly identical, as shown in Figure 6. Indeed, these two sampling plans are substantially equivalent procedures, except for the number of units tested. Figure 7 shows average sample number (ASN) curves for the two plans. If the positive rate is 0.5%, the alternative procedure requires an average of 70 units compared to an average of 102 for the ANSI quarterly audit procedure. If the alternative plan is used to destructively test expensive medical devices, this difference can mean a sizable savings.

Figure 7: ASN Curves of Audit Sampling Plan and Alternative

For any sampling plan, its AQL and LTPD can be found and then other plans providing equivalent protection can be identified. ANSI Z1.4 provides tables of double and multiple sampling plans that match its single sampling plans, and tables of matching double sampling plans, quick switching systems, and variables sampling plans are available for the single sampling plans given in Table I of this article.[3] Single sampling plans are the simplest to use, but require the largest number of samples. Although they are more complicated, the other types of sampling plans can reduce the number of samples tested. For destructive tests of expensive products, the number of units tested is the prime consideration, and the many alternatives to single sampling plans should be investigated.

CONCLUSION

Acceptance sampling is one of the oldest techniques used for quality control, yet it remains poorly understood and misconceptions regarding its procedures and terminology are widely held. Acceptance sampling does not have to be complicated. Your company can optimize its procedures by remembering this list of principles:

The protection level provided by a sampling plan is described by what it accepts (its AQL) and what it rejects (its LTPD).

Selecting a statistically valid sampling plan requires stating the objective of the inspection, selecting the appropriate AQL and LTPD, and then choosing a sampling plan that provides the desired protection.

Companies must know the AQLs and LTPDs of all their sampling plans. It doesn't matter whether a sampling plan comes from MIL-STD-105E or some other source, and the protection provided by a plan does not depend on the lot size; it's the AQL and LTPD that reveal what protection the sampling plan provides.

SPC cannot serve as a replacement for acceptance sampling. Instead, these two techniques should be combined by using the same data to control the process and to make product disposition decisions.

Sampling plans with the same AQL and LTPD are substantially equivalent procedures, so costs can sometimes be reduced by using equivalent double, multiple, or variables sampling plans as alternatives to single sampling plans.

REFERENCES

1. Sampling Procedures and Tables for Inspection by Attributes, MIL-STD-105E, Washington, DC, U.S. Government Printing Office, 1989.
2. Sampling Procedures and Tables for Inspection by Attributes, ANSI/ASQC Z1.4, Milwaukee, WI, American Society for Quality Control, 1981.
3. Taylor, W A, Guide to Acceptance Sampling, Libertyville, IL, Taylor Enterprises, 1992. (Software is supplied with this book.)
4. Schilling, E G, Acceptance Sampling in Quality Control, New York City, Marcel Dekker, 1982.
5. Guideline for Gamma Radiation Sterilization, ANSI/AAMI ST32-1991, Arlington, VA, Association for the Advancement of Medical Instrumentation, 1992.


PRODUCT PLANNING AND REQUIREMENTS DEFINITION USING QFD

Successful products are ones that meet customer needs, are innovative and that offer value. Sounds simple! How do you make that happen?

INTRODUCTION

Quality must be designed into the product, not inspected into it. Quality can be defined as meeting customer needs and providing superior value. This focus on satisfying the customer's needs places an emphasis on techniques such as Quality Function Deployment to help understand those needs and plan a product to provide superior value.

Quality Function Deployment (QFD) is a structured approach to defining customer needs or requirements and translating them into specific plans to produce products to meet those needs. The "voice of the customer" is the term to describe these stated and unstated customer needs or requirements. The voice of the customer is captured in a variety of ways: direct discussion or interviews, surveys, focus groups, customer specifications, observation, warranty data, field reports, etc. This understanding of the customer needs is then summarized in a product planning matrix or "house of quality". These matrices are used to translate higher level "what's" or needs into lower level "how's" - product requirements or technical characteristics to satisfy these needs.

While the Quality Function Deployment matrices are a good communication tool at each step in the process, the matrices are the means and not the end. The real value is in the process of communicating and decision-making with QFD. QFD is oriented toward involving a team of people representing the various functional departments that have involvement in product development: Marketing, Design Engineering, Quality Assurance, Manufacturing/ Manufacturing Engineering, Test Engineering, Finance, Product Support, etc. The active involvement of these departments can lead to balanced consideration of the requirements or "what's" at each stage of this translation process and provide a mechanism to communicate hidden knowledge - knowledge that is known by one individual or department but may not otherwise be communicated through the organization. The structure of this methodology helps development personnel understand essential requirements, internal capabilities, and constraints and design the product so that everything is in place to achieve the desired outcome - a satisfied customer. Quality Function Deployment helps development personnel maintain a correct focus on true requirements and minimizes misinterpreting customer needs. As a result, QFD is an effective communications and a quality planning tool.

CAPTURING THE VOICE OF THE CUSTOMER

The process of capturing the voice of the customer is described in the papers on Product Definition and Steps for Performing QFD. It is important to remember that there is no one monolithic voice of the customer. Customer voices are diverse. In consumer markets, there are a variety of different needs. Even within one buying unit, there are multiple customer voices (e.g., children versus parents). This applies to industrial and government markets as well. There are even multiple customer voices within a single organization: the voice of the procuring organization, the voice of the user, and the voice of the supporting or maintenance organization. These diverse voices must be considered, reconciled and balanced to develop a truly successful product. One technique to accomplish this is to use multiple columns for different priority ratings associated with each customer voice in the product planning matrix.

Quality Function Deployment requires that the basic customer needs are identified. Frequently, customers will try to express their needs in terms of "how" the need can be satisfied and not in terms of "what" the need is. This limits consideration of development alternatives. Development and marketing personnel should ask "why" until they truly understand what the root need is. Break down general requirements into more specific requirements by probing what is needed.

Once customer needs are gathered, they then have to be organized. The mass of interview notes, requirements documents, market research, and customer data needs to be distilled into a handful of statements that express key customer needs. Affinity diagramming is a useful tool to assist with this effort. Brief statements which capture key customer requirements are transcribed onto cards. A data dictionary which describes these statements of need is prepared to avoid any misinterpretation. These cards are organized into logical groupings of related needs. This will make it easier to identify any redundancy and serves as a basis for organizing the customer needs for the first QFD matrix.

In addition to "stated" or "spoken" customer needs, "unstated" or "unspoken" needs or opportunities should be identified. Needs that are assumed by customers and, therefore not verbalized, can be identified through preparation of a function tree. These needs normally are not included in the QFD matrix, unless it is important to maintain focus on one or more of these needs. Excitement opportunities (new capabilities or unspoken needs that will cause customer excitement) are identified through the voice of the engineer, marketing, or customer support representative. These can also be identified by observing customers use or maintain products and recognizing opportunities for improvement.


QFD METHODOLOGY FLOW

The basic Quality Function Deployment methodology involves four basic phases that occur over the course of the product development process. During each phase one or more matrices are prepared to help plan and communicate critical product and process planning and design information. This QFD methodology flow is represented below.

PRODUCT PLANNING USING QFD

Once customer needs are identified, preparation of the product planning matrix or "house of quality" can begin. The sequence of preparing the product planning matrix is as follows:

1. Customer needs or requirements are stated on the left side of the matrix as shown below. These are organized by category based on the affinity diagrams. Ensure the customer needs or requirements reflect the desired market segment(s). Address the unspoken needs (assumed and excitement capabilities). If the number of needs or requirements exceeds twenty to thirty items, decompose the matrix into smaller modules or subsystems to reduce the number of requirements in a matrix. For each need or requirement, state the customer priorities using a 1 to 5 rating. Use ranking techniques and paired comparisons to develop priorities.

2. Evaluate prior generation products against competitive products. Use surveys, customer meetings or focus groups/clinics to obtain feedback. Include competitors' customers to get a balanced perspective. Identify price points and market segments for products under evaluation. Identify warranty, service, reliability, and customer complaint problems to identify areas of improvement. Based on this, develop a product strategy. Consider the current strengths and weaknesses relative to the competition. How do these strengths and weaknesses compare to the customer priorities? Where does the gap need to be closed and how can this be done - copying the competition or using a new approach or technology? Identify opportunities for breakthroughs to exceed competitors' capabilities, areas for improvement to equal competitors' capabilities, and areas where no improvement will be made. This strategy is important to focus development efforts where they will have the greatest payoff.

3. Establish product requirements or technical characteristics to respond to customer requirements and organize into related categories. Characteristics should be meaningful, measurable, and global. Characteristics should be stated in a way to avoid implying a particular technical solution so as not to constrain designers.

4. Develop relationships between customer requirements and product requirements or technical characteristics. Use symbols for strong, medium and weak relationships. Be sparing with the strong relationship symbol. Have all customer needs or requirement been addressed? Are there product requirements or technical characteristics stated that don't relate to customer needs?

5. Develop a technical evaluation of prior generation products and competitive products. Get access to competitive products to perform product or technical benchmarking. Perform this evaluation based on the defined product requirements or technical characteristics. Obtain other relevant data such as warranty or service repair occurrences and costs and consider this data in the technical evaluation.

6. Develop preliminary target values for product requirements or technical characteristics.

7. Determine potential positive and negative interactions between product requirements or technical characteristics using symbols for strong or medium, positive or negative relationships. Too many positive interactions suggest potential redundancy in "the critical few" product requirements or technical characteristics. Focus on negative interactions - consider product concepts or technology to overcome these potential tradeoffs, or consider the tradeoffs in establishing target values.

8. Calculate importance ratings. Assign a weighting factor to relationship symbols (9-3-1, 4-2-1, or 5-3-1). Multiply the customer importance rating by the weighting factor in each box of the matrix and add the resulting products in each column. (A small calculation sketch follows this list.)

9. Develop a difficulty rating (1 to 5 point scale, five being very difficult and risky) for each product requirement or technical characteristic. Consider technology maturity, personnel technical qualifications, business risk, manufacturing capability, supplier/subcontractor capability, cost, and schedule. Avoid too many difficult/high risk items as this will likely delay development and exceed budgets. Assess whether the difficult items can be accomplished within the project budget and schedule.

10. Analyze the matrix and finalize the product development strategy and product plans. Determine required actions and areas of focus. Finalize target values. Are target values properly set to reflect appropriate tradeoffs? Do target values need to be adjusted considering the difficulty rating? Are they realistic with respect to the price points, available technology, and the difficulty rating? Are they reasonable with respect to the importance ratings? Determine items for further QFD deployment. To maintain focus on "the critical few", less significant items may be ignored with the subsequent QFD matrices. Maintain the product planning matrix as customer requirements or conditions change.
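To make step 8 concrete, the small sketch below computes importance ratings for a toy matrix using the 9-3-1 weighting. The customer needs, technical characteristics, and priority values are invented purely for illustration.

```python
# Importance rating of each technical characteristic = sum over customer needs
# of (customer priority) x (relationship weight) in that column of the matrix.
WEIGHTS = {"strong": 9, "medium": 3, "weak": 1}

priorities = {"easy to use": 5, "reliable": 4, "low cost": 3}   # 1-5 ratings

relationships = {                       # need -> {characteristic: symbol}
    "easy to use": {"setup time": "strong", "weight": "medium"},
    "reliable":    {"MTBF": "strong", "weight": "weak"},
    "low cost":    {"part count": "strong", "setup time": "weak"},
}

importance = {}
for need, row in relationships.items():
    for characteristic, symbol in row.items():
        importance[characteristic] = (importance.get(characteristic, 0)
                                      + priorities[need] * WEIGHTS[symbol])

for characteristic, score in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(characteristic, score)
# setup time 48, MTBF 36, part count 27, weight 19
```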

One of the guidelines for successful QFD matrices is to keep the amount of information in each matrix at a manageable level. With a more complex product, if one hundred potential needs or requirements were identified, and these were translated into an equal or even greater number of product requirements or technical characteristics, there would be more than 10,000 potential relationships to plan and manage. This becomes an impossible number to comprehend and manage. It is suggested that an individual matrix not address more than twenty or thirty items on each dimension of the matrix. Therefore, a larger, more complex product should have its customer needs decomposed into hierarchical levels.

To summarize the initial process, a product plan is developed based on initial market research or requirements definition. If necessary, feasibility studies or research and development are undertaken to determine the feasibility of the product concept. Product requirements or technical characteristics are defined through the matrix, a business justification is prepared and approved, and product design then commences.

CONCEPT SELECTION AND PRODUCT DESIGN

Once product planning is complete, a more complete specification may be prepared. The product requirements or technical characteristics and the product specification serve as the basis for developing product concepts. Product benchmarking, brainstorming, and research and development are sources for new product concepts. Once concepts are developed, they are analyzed and evaluated. Cost studies and trade studies are performed. The concept selection matrix can be used to help with this evaluation process.


The concept selection matrix shown below lists the product requirements or technical characteristics down the left side of the matrix.

These serve as evaluation criteria. The importance rating and target values (not shown) are also carried forward and normalized from the product planning matrix. Product concepts are listed across the top. The various product concepts are evaluated on how well they satisfy each criterion in the left column using the QFD symbols for strong, moderate or weak. If the product concept does not satisfy the criterion, the column is left blank. The symbol weights (5-3-1) are multiplied by the importance rating for each criterion. These weighted factors are then added for each column. The preferred concept will have the highest total. This concept selection technique is also a design synthesis technique. For each blank or weak symbol in the preferred concept's column, other concept approaches with strong or moderate symbols for that criterion are reviewed to see if a new approach can be synthesized by borrowing part of another concept approach to improve on the preferred approach.

Based on this and other evaluation steps, a product concept is selected. The product concept is represented with block diagrams or a design layout. Critical subsystems, modules or parts are identified from the layout. Criticality is determined in terms of effect on performance, reliability, and quality. Techniques such as fault tree analysis or failure modes and effects analysis (FMEA) can be used to determine criticality from a reliability or quality perspective.

The subsystem, assembly, or part deployment matrix is then prepared. The process leading up to the preparation of the deployment matrix is depicted below.

The product requirements or technical characteristics defined in the product planning matrix become the "what's" that are listed down the left side of the deployment matrix along with priorities (based on the product planning matrix importance ratings) and target values. The deployment matrix is prepared in a manner very similar to the product planning matrix. These product requirements or technical characteristics are translated into critical subsystem, assembly or part characteristics. This translation considers criticality of the subsystem, assembly or parts as well as their characteristics from a performance perspective to complement consideration of criticality from a quality and reliability perspective. Relationships are established between product requirements or technical characteristics and the critical subsystem, assembly or part characteristics. Importance ratings are calculated and target values for each critical subsystem, assembly or part characteristic are established. An example of a part/assembly deployment matrix is shown:

PROCESS DESIGN

Quality Function Deployment continues this translation and planning into the process design phase. A concept selection matrix can be used to evaluate different manufacturing process approaches and select the preferred approach. Based on this, the process planning matrix shown below is prepared.

Again, the "how's" from the higher level matrix (in this case the critical subsystem, assembly or part characteristics) become the "what's" which are used to plan the process for fabricating and assembling the product. Important processes and tooling requirements can be identified to focus efforts to control, improve and upgrade processes and equipment. At this stage, communication between Engineering and Manufacturing is emphasized and tradeoff's can be made as appropriate to achieve mutual goals based on the customer needs.

In addition to planning manufacturing processes, more detailed planning related to process control, quality control, set-up, equipment maintenance and testing can be supported by additional matrices. The following provides an example of a process/quality control matrix.

The process steps developed in the process planning matrix are used as the basis for planning and defining specific process and quality control steps in this matrix.

The result of this planning and decision-making is that Manufacturing focuses on the critical processes, dimensions and characteristics that will have a significant effect on producing a product that meets customer needs. There is a clear trail from customer needs to the design and manufacturing decisions made to satisfy those needs. Disagreements over what is important at each stage of the development process should be minimized, and there will be greater focus on "the critical few" items that affect the success of the product.

QFD PROCESS

Quality Function Deployment begins with product planning; continues with product design and process design; and finishes with process control, quality control, testing, equipment maintenance, and training. As a result, this process requires multiple functional disciplines to adequately address this range of activities. QFD is synergistic with multi-function product development teams. It can provide a structured process for these teams to begin communicating, making decisions and planning the product. It is a useful methodology, along with product development teams, to support a concurrent engineering or integrated product development approach. Quality Function Deployment, by its very structure and planning approach, requires that more time be spent up-front in the development process making sure that the team determines, understands and agrees with what needs to be done before plunging into design activities. As a result, less time will be spent downstream because of differences of opinion over design issues or redesign because the product was not on target. It leads to consensus decisions, greater commitment to the development effort, better coordination, and reduced time over the course of the development effort.

QFD requires discipline. It is not necessarily easy to get started with. The following is a list of recommendations to facilitate initially using QFD.

Obtain management commitment to use QFD.

Establish clear objectives and scope of QFD use. Avoid first using it on a large, complex project if possible. Will it be used for the overall product or applied to a subsystem, module, assembly or critical part? Will the complete QFD methodology be used or will only the product planning matrix be completed?

Establish a multi-functional team. Get an adequate time commitment from team members.

Obtain QFD training with practical hands-on exercises to learn the methodology and use a facilitator to guide the initial efforts.

Schedule regular meetings to maintain focus and avoid the crush of the development schedule overshadowing effective planning and decision-making.

Avoid insisting on perfect data before starting. Many times significant customer insights and data already exist within the organization, but in the form of hidden knowledge - not communicated to the people who need this information. On the other hand, it may be necessary to spend additional time gathering the voice of the customer before beginning QFD. Avoid technical arrogance and the belief that company personnel know more than the customer.

Quality Function Deployment is an extremely useful methodology to facilitate communication, planning, and decision-making within a product development team. It is not a paperwork exercise or additional documentation that must be completed in order to proceed to the next development milestone. It not only brings the new product closer to the intended target, but reduces development cycle time and cost in the process.

The following is a six-step approach:

a. Product strategy must be established that provides a framework for properly selecting markets and product ideas. Customers and potential customers are then identified. We would work with the executive team to assess markets, competition, competitive strengths, and product lines. We would map company position against competitors in various dimensions to provide insights and help develop strategy.

b. Product screening evaluates product concepts and marketplace needs against the product strategy objectives. We would help establish a portfolio management and pipeline management process including screening and feasibility criteria. For product ideas that meet screening criteria, we would help organize and conduct preliminary feasibility studies.

c. Team & project organization are needed to initiate a project. We can help with this and provide team-building training as required.

d. Customer needs data collection uses a variety of methods to gather the "voice of the customer". Depending upon the nature of the product and the markets, these methods include: focus groups, customer meetings, surveys and market research, and product support feedback systems. We would help establish these systems and processes. We would help identify required data and areas of investigation and would develop interview or discussion guides as required. We would participate in and facilitate meetings and focus groups as required.

e. Quality function deployment (QFD) training would be provided if development personnel lacked an understanding of this useful methodology. We believe QFD is an important tool to develop an understanding of customer needs among the product development team members and deploy the "voice of the customer" in a structured manner throughout the development process (see Customer Focused Development with QFD). We would conduct a two- or three-day workshop that provides hands-on training using a series of exercises to develop practical skills in applying QFD.

f. QFD facilitation is then provided to help guide development personnel through the process of product planning using QFD. Depending on the complexity of the product and the amount of data that was available, we would facilitate an initial QFD planning session that is typically several days in length. If data was lacking, we would assign tasks to obtain the needed information and schedule subsequent meetings as required. The QFD facilitation would be divided into planning phases that would occur over the course of the development cycle:

1.) product and subsystem planning and concept selection;

2.) assembly and part deployment;

3.) process planning; and

4.) process control and quality control planning.

What is Quality Function Deployment?

The origin of QFD...

QFD originated in the 1960's at the Mitsubishi Kobe shipyards in Japan. Initially it was a process which involved 'Quality Tables' and was used successfully in the requirements capture of components that make up supertankers. The concept of Quality Function Deployment was an evolutionary move from 'Quality Tables' by Dr Yoji Akao who pioneered the process. The first article on QFD appeared in 1972 entitled "Development & Quality Assurance of New Products: A System of Quality Deployment" in the monthly magazine 'Standardisation and Quality Control'. In 1978 the process was published as a paperback entitled 'QFD: An approach to Total Quality Control'. QFD was formally introduced to the United States in 1983 by Furukawa, Kogure and Akao during a four-day seminar to Quality Assurance Managers.

In 1984 Ford USA was introduced to the QFD process - one year later a project was set up with Ford Body and Assembly and its suppliers. In 1987 The Budd Company and Kelsey-Hayes, both Ford suppliers, developed the first case study on QFD outside Japan. In parallel with this, Bob King published his book (1987) entitled 'Better designs in half the time: Implementing QFD in America'. Soon after, John Hauser and Don Clausing published their article "The House of Quality" in the Harvard Business Review. This article was the catalyst that sparked the real interest in QFD - and because QFD is not a proprietary process, it soon became used in ever widening circles with many followers across the globe.

What is Quality Function Deployment ?

'QFD is a methodology used for structured product planning and development - it enables the product development team to clearly specify the customer's needs and wants against how it is to be achieved. Each proposed feature is then systematically evaluated in terms of its impact within the overall product design.' Quality Function Deployment (QFD) is a method that promotes structured product planning and development - enabling the product development team to clearly specify and evaluate the customer's needs and wants against how they could be measured (CTQ) and then achieved in the form of a solution.

The methodology takes the design team through the concept, creation and realisation phases of a new product with absolute focus. QFD also helps define what the end user is really looking for in the way of market driven features and benefits - it lists customer requirements, in the language of the Customer and helps you translate these requirements into appropriate new product characteristics.

QFD - a powerful tool with a misleading title...

The word 'Quality' in QFD has led to much confusion and sadly has not helped to promote the methodology to the level it deserves. As a result, organisations usually get introduced to QFD via their own quality departments. Whilst Quality professionals have a role in the process of QFD, Marketing, Development and Manufacturing play a much more vital role.

The House of Quality...

The QFD process involves constructing one or more matrices - the first of which is entitled 'The House of Quality'. The matrix displays the Customer's needs and wants (What the Customer wants) against the proposed technical solution (How do we achieve it). Customer weightings are applied to prioritise the most important features and a relationship matrix is used to evaluate interdependencies between the 'whats' and the 'Hows'. Any technical interdependencies are further evaluated by the correlation matrix above the 'Hows'. The results of the relationship matrix, in turn, highlight the most important elements of the new product. In addition to this, competitor information can then be evaluated for both the customer 'Wants' and technical features.

In addition to the 'House of Quality' further matrices can be produced throughout the design process. The house of quality is turned into a street of quality in which the outputs from one QFD process become the input of another. This provides a process where market requirements are systematically and rigorously steered from product definition through development and into manufacturing...

Summary

The use of QFD can help you identify design objectives that reflect the needs of real Customers. Identifying design objectives from the customer's point of view ensures that customers' interests and values are built into each phase of the product innovation process. It can also promote an evolutionary approach to product innovation by carefully evaluating, from both market and customer perspectives, the performance of preceding products.

QFD's primary objectives:

Keeping a Customer focus.

Product Development Specifications - Requirements Capture.

Reduction in the product development cycle.

To increase Customer satisfaction.

QFD promotes the following benefits:

Customer driven products.

Reduced time to market - Up to 50% reduction.

Improved Communications throughout the product development process.

Improved cross functional communications.

30 to 50% reduction in engineering / design changes.

30 to 50% reduction in design cycle time.

20 to 60% reduction in production start-up costs.

50% reduction in warranty claims.

Increased Customer satisfaction.

Focus on the Customer...

Many new products fail because of a lack of customer focus. QFD helps the organisation focus on genuine customer requirements. It also provides the customer with a route into the development process. QFD forces you to direct innovation effort at real customer requirements instead of second-guessing them.

Reduced Time to Market...

QFD is a highly structured process in which all product features are evaluated for their net benefit to the customer. By default, the process ensures that all technical features are evaluated pragmatically, so that only features that genuinely benefit the customer are carried into the proposed new product.

Reduced Cost...

QFD supports the identification of the product characteristics that customers rate as less important. High performances with respect to these characteristics, when compared to those of competing products, provide opportunities for cost reduction.

To thrive in business, designing products and services that excite the customer and creating new markets is a critical strategy. And while growth can be achieved in many different ways--selling through different channels, selling more to existing customers, acquisitions, geographic expansion--nothing energizes a company more than creating new products or upgrading existing products to create customer delight.

Quality Function Deployment (QFD) is a methodology for building the "Voice of the Customer" into product and service design. It is a team tool which captures customer requirements and translates those needs into characteristics about a product or service.

The origins of QFD come from Japan. In 1966, the Japanese began to formalize the teachings of Yoji Akao on QFD. Since its introduction to America, QFD has helped to transform the way businesses:

plan new products

design product requirements

determine process characteristics

control the manufacturing process

document already existing product specifications

QFD uses some principles from Concurrent Engineering in that cross functional teams are involved in all phases of product development. Each of the four phases in a QFD process uses a matrix to translate customer requirements from initial planning stages through production control.

A QFD process involves:

Phase 1 Product Planning

Phase 2 Product Design

Phase 3 Process Planning

Phase 4 Process Control

Each phase, or matrix, represents a more specific aspect of the product's requirements. Binary relationships between elements are evaluated for each phase. Only the most important aspects from each phase are deployed into the next matrix.
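
As a rough sketch of that deployment step, the fragment below computes weighted "how" importances for one matrix and carries only the top-ranked hows forward as the whats of the next phase. The requirement names, importance values and the 9-3-1 relationship weights are assumptions for illustration (the 9-3-1 convention is described later in this text); they are not data from any particular project.

# Hypothetical sketch: the prioritized "hows" of one phase become the "whats" of the next.
REL = {"strong": 9, "medium": 3, "weak": 1}

def how_importances(what_importance, relationships):
    # relationships maps (what, how) -> relationship strength
    totals = {}
    for (what, how), strength in relationships.items():
        totals[how] = totals.get(how, 0) + what_importance[what] * REL[strength]
    return totals

# Phase 1: customer requirements (whats) vs technical requirements (hows)
phase1_whats = {"easy to grip": 5, "does not corrode": 3}
phase1_rel = {
    ("easy to grip", "handle diameter"): "strong",
    ("easy to grip", "surface finish"): "medium",
    ("does not corrode", "surface finish"): "strong",
    ("does not corrode", "material spec"): "strong",
}

phase1_hows = how_importances(phase1_whats, phase1_rel)
# Deploy only the most important hows into the Phase 2 matrix as its whats.
phase2_whats = dict(sorted(phase1_hows.items(), key=lambda kv: kv[1], reverse=True)[:2])
print(phase2_whats)   # {'handle diameter': 45, 'surface finish': 42}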

Phase 1-Led by the marketing department, Phase 1, or product planning, is also called The House of Quality. Many organizations only get through this phase of a QFD process. Phase 1 documents customer requirements, warranty data, competitive opportunities, product measurements, competing product measures, and the technical ability of the organization to meet each customer requirement. Getting good data from the customer in Phase 1 is critical to the success of the entire QFD process.

Phase 2- Phase 2 is led by the engineering department. Product design requires creativity and innovative team ideas. Product concepts are created during this phase and part specifications are documented. Parts that are determined to be most important to meeting customer needs are then deployed into process planning, or Phase 3.

Phase 3-Process planning comes next and is led by manufacturing engineering. During process planning, manufacturing processes are flowcharted and process parameters (or target values) are documented.

Phase 4-Finally, in production planning, performance indicators are created to monitor the production process, maintenance schedules, and skills training for operators. Also in this phase, decisions are made as to which processes pose the most risk, and controls are put in place to prevent failures. The quality assurance department, in concert with manufacturing, leads Phase 4.

QFD is a systematic means of ensuring that customer requirements are accurately translated into relevant technical descriptors throughout each stage of product development. Meeting or exceeding customer demands means more than just maintaining or improving product performance. It means building products that delight customers and fulfill their unarticulated desires. Companies growing into the 21st century will be enterprises that foster the needed innovation to create new markets.

Summary

QFD is originally from the manufacturing industry and was developed in 1966 in Japan. It is a quality oriented process which attempts to prioritize customer needs and wants and translate these needs and wants into technical requirements and specifications in order to deliver a product or service by focusing on customer satisfaction. The main task is to transform the voice of the customer into a prioritized set of actionable targets. Although QFD is normally used for manufacturing purposes, major software organizations are adopting QFD and applying it to the software development environment. This is termed SQFD (software QFD). Some benefits of QFD include reducing time to market, reducing design changes, decreased design and manufacturing cost, improved quality and possibly the most important, increased customer satisfaction.

The QFD process consists of four phases: Product Planning, Parts Deployment, Process Planning and Production Planning, known as the Clausing Four-Phase Model. Three QFD matrices and a production planning table translate customer requirements into production equipment settings. The first phase is the House of Quality, and its HOWs (technical requirements) become the WHATs of the second phase, Parts Deployment. The HOWs of this stage (Parts Characteristics) become the WHATs of the third stage, Process Planning. Finally, the HOWs of this stage (key process operations) become the WHATs of the Production Planning stage (the last stage).

The House of Quality is the tool used through the process to prioritize customer requirements into product features. The House of Quality consists of six steps: Customer Requirements, Planning Matrix, Technical Requirements, Inter-relationships, Roof and Targets. The HOQ takes in structured requirements as input and outputs design targets for prioritized requirements.

When QFD is applied to Software Engineering, the Product Planning phase is expanded slightly. This model applies QFD to focus on the needs of building software, specifically focusing on requirements engineering. The most important aspect of SQFD is customer requirements. These requirements are mapped into technical requirements in Phase 1 and prioritized based on customer input.

Correlation matrix

In any product there are bound to be interactions between different design requirements. These correlations are shown in the triangular matrix at the top of the QFD 1 chart (the "roof" on the "house of quality"). Relationships are identified as positive or negative, strong or weak.

Negative relationships are particularly important, as they represent areas where trade-offs are needed. If these are not identified and resolved early in the process, there is a danger that they will lead to unfulfilled requirements. Some of these trade-offs may cross departmental or even company boundaries. This should not present problems in a proper team environment, but if designers are working in isolation, unresolved requirement conflicts can lead to repeated and unproductive iterations.

Improvement direction

In some cases the target value for a design requirement is the optimum measurement for that requirement. In other cases it is an acceptable value, but a higher or lower value would be even better. You would not normally want to expend additional design effort to increase the weight of an aircraft component if the initial design came out below the target weight.

This row on the chart is used to identify the improvement direction for the design requirement, especially if this is not immediately obvious.

Target value

Each design requirement will have a target value. These are entered into TeamSET using the same form as the requirement, and are displayed in this part of the chart.

Design requirements

Design requirements are descriptions, in measurable terms, of the features the product or service needs to exhibit in order to satisfy the customer requirements.

Requirement groupings

Once the list of design requirements is complete, it must be rearranged and the requirements amalgamated to bring them all to the same level of detail and make the matrix more manageable. TeamSET allows you to structure the requirements with as many levels of headings and sub-headings as you need. Notes may be added to hold more detailed requirements that have been grouped into a single design requirement.

Customer requirements

Who are the customers?

The first step in any QFD exercise is to identify the "customers" or "influencers". Any individual who will buy, use or work with the product should be considered a customer. Thus the purchaser, the user (if different from the purchaser), the retailer, the service engineer, and the manufacturing plant where the product is made are all customers for the design. If you supply an international market place remember that customers from different geographic locations or from different ethnic or cultural backgrounds may have differing requirements for your product.

Data collection

There are many possible sources for data relating to customer requirements. These include:

Customers (go and ask them!!!)

Existing market research

Specially commissioned market research

Customer complaints

Warranty claims

Rejects

Perhaps the commonest mistake in applying QFD is for the design team or marketing organisation to substitute for the customer. There may appear to be many good reasons for this. Whatever the excuse, the result will only confirm the team's preconceptions. Very little will be learnt from it.

Sorting the customer requirements

For a product or system of more than trivial complexity, the customer requirements collected during the initial phase may amount to several hundred separate "wants" and "needs". To make these comparable and to keep the matrix a manageable size, they must be grouped into related topics, and combined to give requirements at the same level of detail. TeamSET allows you to structure the requirements with as many levels of headings and sub-headings as you need. Notes may be added to the requirements to hold more detailed "wants" that have been grouped into a single customer requirement.

The relative importance of each of the customer requirements is assessed by the customer and entered into the "Importance rating" column on the chart.

Customer importance rating

Each customer requirement is assessed by the customer on a scale of 1 to 5, where 5 is the most important. It is important that the full range is used if the design requirements are to be prioritised correctly.

Relationship matrix

The relationship between each customer requirement and every design requirement is assessed and entered in the matrix. The symbols shown on the left are more intuitive and are preferred by many. If you would rather use the ones on the right, it is a simple matter to switch between them.

TeamSET symbols Alternative symbol set

For each cell in the matrix, the question should be asked: "Does this design requirement have an influence on our ability to satisfy the customer requirement?" If there is a relationship, its strength should be assessed in terms of "strong", "medium", or "weak". It should be noted that the design requirement may have a negative influence on the customer requirement. This is still a valid relationship, and its strength should be evaluated in the normal way.

Customer's competitor rating

The customer's opinion of the competitor products is collected at the same time as the customer requirements. Competitor products are assessed for their ability to satisfy each customer requirement. The data is represented on a scale of one to five, where five indicates that the requirement is totally satisfied. The data may be input numerically or graphically. Results are displayed graphically.

Products are differentiated on the chart by using different symbols and / or colours. The "Competitor" column is graduated from 1 to 5, and a point is plotted for each product against each customer requirement. The points may optionally be connected by lines.

Technical competitor assessment

The technical assessment of competitors' products should be based on testing, inspection and strip-down of those products. Assessment should be based on the Design requirements and should be measured in the same units as the target values for the Design requirements. Other tools, such as Design for Assembly (DFA) analysis, may prove useful in evaluating competitors' products.

Design requirement absolute importance

A major objective of QFD 1 is to derive a prioritised list of design requirements to enable design effort to be focused on areas that will make the maximum contribution to satisfying the customer. To do this it is necessary to assign numerical values to the relationships in the matrix. The values used are:

Weak relationship = 1
Medium relationship = 3
Strong relationship = 9

The numerical values of the relationships are multiplied by the importance weighting for the relevant customer requirement, and the results added for the matrix column associated with each design requirement. The total is displayed in the 'Absolute importance' cell.

The design requirements with the highest totals are those which are most closely involved in satisfying the customer's wants and needs. These are the ones that we must get right. The design requirements with low scores are those which are not very important to the customer. These are the ones where we can possibly afford to compromise. The ranking of the design requirements is displayed in the Ranking row on the chart. It can also be displayed and printed graphically on an interactive bar graph.
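
A small worked sketch of this calculation, with invented customer requirements and relationship strengths, might look as follows; only the 1-3-9 values and the multiply-and-total rule come from the text above.

# Absolute importance of each design requirement = sum over customer requirements of
# (customer importance rating) x (relationship value). Data below is illustrative only.
REL_VALUE = {"weak": 1, "medium": 3, "strong": 9}

customer_importance = {"easy to clean": 5, "quiet operation": 3}

relationship = {   # (customer requirement, design requirement) -> strength
    ("easy to clean", "number of crevices"): "strong",
    ("easy to clean", "noise level dB(A)"): "weak",
    ("quiet operation", "noise level dB(A)"): "strong",
}

absolute = {}
for (cust, design), strength in relationship.items():
    absolute[design] = absolute.get(design, 0) + customer_importance[cust] * REL_VALUE[strength]

ranking = sorted(absolute, key=absolute.get, reverse=True)
print(absolute)   # {'number of crevices': 45, 'noise level dB(A)': 32}
print(ranking)    # the design requirements we must get right come first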

Design requirement ranking

Design requirements are ranked in order of importance to simplify interpretation of the chart. The ranking can be displayed visibly as an interactive bar graph. This can be used to select the most important requirements to be carried forward to a subsequent QFD chart.

Difficult or new

Design requirements that need new features or are difficult to satisfy should be noted here. This will highlight the fact that development work may be required.

Carry forward

When QFD charts are cascaded, design requirements become visible on other charts. The "carry forward" row shows how many other charts each design requirement is used on.

EXAMPLE:

Affinity Analysis

A major supplier of digital devices needs to upgrade one of its products, a palm pilot. They found they were losing business because their competitors, to some extent, are already one generation ahead of their current product.

These are requirements based on the voice of the customer, along with their importance ratings, for this handheld palm pilot. Our group discussed and sorted the requirements and ended up with the following grouping, with each requirement assigned to one of the groups listed below: Hardware, Ease of Use, Speed, and Cost.

Items sorted due to geographical position - Hardware

Non-expensive hardware - Cost

Shockproof device - Hardware

Small size of hardware - Hardware

Items sorted due to numerical value - Ease of Use

Reliable text input - Ease of Use

Item id with a mixture of numbers and letters - Ease of Use

Better possibility to read the display under different conditions - Hardware

Platform independence - Ease of Use

Easy data connection between host and device - Hardware

Minimize the text input - Speed

Short response time when working with the application - Speed

Easy to understand and learn - Ease of Use

Maximize input by selecting items from lists - Speed

Non-expensive development environment - Cost

Easy to browse the application - Ease of Use

House of Quality

EXAMPLE Summary

Using QFD we were to group requirements as a customer would, using Affinity Analysis, and take some of those requirements and build a House of Quality with them. The first exercise was very easy to complete. We did not write each requirement out on cards and perform the affinity analysis in the traditional way. Instead we came up with our own groupings and, as a group, assigned each requirement to the group we thought it belonged in. The list of requirements pertained to a hardware product, so we guessed on a majority of the requirements' groups.

The second exercise was more difficult to do because of the lack of time and understanding of how to fill out the House of Quality. There is a step-by-step process that you work through to fill in a House of Quality, and even the simple one that we were trying to do took twice as long as the time given for the exercise. Several aspects of the HoQ require customer input, so without it we once again guessed at what we felt were realistic values.

additional

DESCRIPTION OF THE QFD PROCESS

By Ted Squires

Quality Function Deployment (QFD) is a structured, multi-disciplinary technique for product definition that maximizes value to the customer. The application of the QFD process is an art that varies somewhat from practitioner to practitioner, but the graphic on the next page illustrates a typical application of the QFD process. The graphic shows a concept called the QFD House of Quality (HOQ), a device for organizing the flow of thinking and discussion that leads to finished product specifications. The House of Quality is built by a firm's own multi-disciplinary team under guidance from a trained QFD facilitator (preferably a facilitator with both marketing and technical experience).

Given one or more specific objectives (e.g., a narrow focus such as "optimize engine performance" or a more global focus such as "optimize overall passenger comfort"), the QFD process starts with obtaining customer requirements through market research. These research results are inputs into the House of Quality. Below is a discussion of each of the rooms of the House of Quality and how they are built.

The "Whats" Room: Typically there are many customer requirements, but using a technique called affinity diagramming, the team distills these many requirements into the 20 or 30 most important needs. The affinity diagramming process is critical to the success of QFD in that there is vigorous discussion to reach consensus as to what the customers really meant by their comments. This is a powerful technique for reconciling the different interpretations held by marketing, design engineering or field service. The affinity diagramming process usually takes about one to two solid team days to complete, depending on how narrow or global the objective is. The results from the affinity diagramming are placed into the "Whats" room in the HOQ.

The Importance Ratings and Customer Competitive Assessment Rooms: Marketing and/or the market researcher designs the market research so that the team can use the results as inputs to successfully complete the Importance Ratings and Customer Competitive Assessment rooms. These rooms are located on the matrix where benefit rankings and ratings are assembled for analysis. The Importance Rankings provide the team with a prioritization of customer requirements while the Customer Competitive Assessment allows us to spot strengths and weaknesses in both our product and the competition's products.

The "Hows" Room: The next step is the completion of the "Hows" room. In this activity the entire team asks for each "What", "How would we measure product performance which would provide us an indication of customer satisfaction for this specific 'What'?" The team needs to come up with at least one product performance measure, but sometimes the team recognizes that it takes several measures to adequately characterize product performance.

The Relationships Matrix Room: After the "Hows" room has been completed, the team begins to explore the relationships between all "Whats" and all "Hows" as they complete the Relationships Matrix room. During this task the team systematically asks, "What is the relationship between this specific 'how' and this specific 'what'?" "Is there cause and effect between the two?" This is a consensus decision within the group. Based on the group decision, the team assigns a strong, medium, weak or no relationship value to this specific "what/how" pairing. Then the team goes on to the next "what/how" pairing. This process continues until all "what/how" pairings have been reviewed. The technical community begins to assume team leadership in these areas.

The Absolute Score and Relative Score Rooms: Once the Relationships Matrix room has been completed, the team can then move on to the Absolute Score and Relative Score rooms. This is where the team creates a model or hypothesis as to how product performance contributes to customer satisfaction. Based on the Importance Ratings and the Relationship Matrix values, the team calculates the Absolute and Relative Scores. These calculations are the team's best estimate as to which product performance measures ("hows") exert the greatest impact on overall customer satisfaction. Engineering now begins to know where the product has got to measure up strongly in order to beat the competition. The last three rooms receive the most input from the technical side of the team, but total team involvement is still vital.

The Correlation Matrix Room: There are times in many products where customer requirements translate into physical design elements which conflict with one another; these conflicts are usually reflected in the product "hows". The Correlation Matrix room is used to help resolve these conflicts by highlighting those "hows" which share the greatest conflict.

For example, let's say that the "how" called "weight" should be minimized for greatest customer satisfaction. At the same time there might be two other "hows" titled "strength" and "power capacity". The customer has expressed preferences that these be maximized. Based on what we know about physics, there may be a conflict in minimizing "weight" and maximizing "strength" and "power capacity". The analysis that takes place in the Correlation Matrix room systematically forces a technical review for all likely conflicts and then alerts the team to either optimize or eliminate these conflicts or consider design alternatives.

The mechanics of the analysis are to review each and every "how" for possible conflict (or symbiosis) against every other "how". As mentioned in the previous sentence, symbiotic relationships between "hows" do sometimes surface in this analysis. This analysis also allows the team to capitalize on those symbiotic situations.

The Technical Competitive Assessment Room: This is the room where engineering applies the measurements identified during the construction of the "Hows" room. "Does our product perform better than the competitive product according to the specific measure that we have identified?" Here is where the team tests the hypothesis created in the Relative Score room. It helps the team to confirm that it has created "hows" that make sense, that really do accurately measure characteristics leading to customer satisfaction. Analysis in the Technical Competitive Assessment and Customer Competitive Assessment rooms can also help uncover problems in perception. For example, perhaps the customer wants a car that is fast, so your team comes up with the "how" of "elapsed time in the quarter mile". After comparing performance between your car and the competitor's vehicle, you realize that "you blew the doors off the competitor's old crate". However when you look in the Customer Competitive Assessment room, you see that most of the marketplace perceives the competitor's car as being faster. While you might have chosen one of the correct "hows" to measure performance, it is clear that your single "how" does not completely reflect performance needed to make your car appear faster.

The Target Values Room: The last room of Target Values contains the recommended specifications for the product. These specifications will have been well thought out, reflecting customer needs, competitive offerings and any technical trade-off required because of either design or manufacturing constraints.

The House of Quality matrix is often called the phase one matrix. In the QFD process there is also a phase two matrix to translate finished product specifications into attributes of design (architecture, features, materials, geometry, subassemblies and/or component parts) and their appropriate specifications. Sometimes a phase three matrix is used to translate attributes-of-design specifications into manufacturing process specifications (temperature, pressure, viscosity, rpm, etc.).

QFD CONDUCIVE ORGANIZATIONAL ENVIRONMENT

The huge success enjoyed by firms using QFD is balanced by the numerous firms failing to effectively implement it. We have listed several success keys that should enhance the chances of successful implementation:

1. Management must make it clear that QFD is a priority.

2. Set clear priorities for QFD activities. Specifically, management needs to allocate resources for and insist on execution of market research and Technical Competitive Assessment.

3. Make QFD training available, preferably "just-in-time" to use QFD.

4. Insist that decisions be based upon customer requirements.

5. Understand the terms used in QFD.

6. Insist on cross-functional commitment and participation.

7. Become leaders of QFD rather than managers.

Process Planning is the third of four phases in the QFD process. In this phase, relationships between the prioritized design attributes from the previous phase and process steps identified during this phase are documented. This is accomplished by completing the Phase III Process Planning matrix.

The approach used to complete the Planning matrix is similar to the approach taken when completing the Design matrix during the Design Planning phase.

In this phase, the objective is to identify key process steps which will be further analyzed in Phase IV, Production Planning.

Matrix Relationships

The relationships between design attributes and process steps are shown in the body of the Planning matrix. The key design attributes from the columns in the Design matrix become the rows of the Planning matrix. Processes applicable to the design attributes are placed in the columns of the Planning matrix. Relationships between column and row entries are shown as follows:

solid circle - strong relationship, open circle - moderate relationship, triangle - weak relationship, blank - no relationship.

Each type of relationship is also assigned a numeric value (e.g., 9,3,1 for strong, moderate, and weak relationships respectively) for further analysis that occurs while prioritizing the process steps.

Design Attribute Importance Weights

The importance weight values for the design attributes are taken from the values previously derived in the Design matrix.

Design Attribute Relative Weights

The importance weights for the design attributes are then translated to a 100-point scale to produce the relative weights for the attributes. The relative weight for each attribute is obtained by dividing the Importance Weight of the attribute by the Sum of all Importance Weights, and then multiplying by 100. For example, importance weights of 100, 50, 30, and 20 would translate to relative weights of 50, 25, 15, and 10 respectively. The sum of the relative weights is always 100.
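
In code, the normalization is a one-liner; the sketch below simply reproduces the worked numbers from the paragraph above.

def relative_weights(importance):
    # Divide each importance weight by the sum of all weights, then multiply by 100.
    total = sum(importance.values())
    return {k: 100 * v / total for k, v in importance.items()}

print(relative_weights({"A": 100, "B": 50, "C": 30, "D": 20}))
# {'A': 50.0, 'B': 25.0, 'C': 15.0, 'D': 10.0} -- the relative weights always sum to 100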

Process Importance Weights

The following steps are completed to calculate the importance weight for each process step:

For each design attribute that has a relationship with the process step, multiply the design attribute relative weight point value by the numeric value associated with the type of relationship between the design attribute and the process step (e.g., in Relationship Between Customer and Technical Requirements a value of "3" was assigned to a moderate relationship).

Sum the values obtained in the previous step to determine the importance weight. For example, if there are three design attributes related to the process step, the previous step would have been completed three times and the values obtained would be totaled.
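
The two steps above amount to another weighted sum. The sketch below uses invented design attributes, process steps and relationships, with the 9-3-1 relationship values described earlier, and also shows the 100-point translation used for the process relative weights in the next section.

# Process importance weight = sum over related design attributes of
# (attribute relative weight) x (relationship value). All data is illustrative.
REL_VALUE = {"strong": 9, "moderate": 3, "weak": 1}

attribute_relative_weight = {"A": 50, "B": 25, "C": 15, "D": 10}

relationships = {   # (design attribute, process step) -> strength
    ("A", "injection moulding"): "strong",
    ("B", "injection moulding"): "moderate",
    ("B", "final assembly"): "strong",
    ("C", "final assembly"): "weak",
}

process_importance = {}
for (attr, step), strength in relationships.items():
    process_importance[step] = (process_importance.get(step, 0)
                                + attribute_relative_weight[attr] * REL_VALUE[strength])

total = sum(process_importance.values())
process_relative = {step: 100 * w / total for step, w in process_importance.items()}
print(process_importance)   # {'injection moulding': 525, 'final assembly': 240}
print(process_relative)     # roughly {'injection moulding': 68.6, 'final assembly': 31.4}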

Process Relative Weights

The importance weights for the process steps are then translated to a 100-point scale to produce the relative weights for the processes.

EXAMPLE 1:

EXAMPLE 2:

FMEA

Introduction

Customers are placing increased demands on companies for high quality, reliable products. The increasing capabilities and functionality of many products are making it more difficult for manufacturers to maintain the quality and reliability. Traditionally, reliability has been achieved through extensive testing and use of techniques such as probabilistic reliability modeling. These are techniques done in the late stages of development. The challenge is to design in quality and reliability early in the development cycle.

Failure Modes and Effects Analysis (FMEA) is a methodology for analyzing potential reliability problems early in the development cycle where it is easier to take actions to overcome these issues, thereby enhancing reliability through design. FMEA is used to identify potential failure modes, determine their effect on the operation of the product, and identify actions to mitigate the failures. A crucial step is anticipating what might go wrong with a product. While anticipating every failure mode is not possible, the development team should formulate as extensive a list of potential failure modes as possible.

The early and consistent use of FMEAs in the design process allows the engineer to design out failures and produce reliable, safe, and customer pleasing products. FMEAs also capture historical information for use in future product improvement.

Types of FMEAs

There are several types of FMEAs; some are used much more often than others. FMEAs should always be done whenever failures would mean potential harm or injury to the user of the end item being designed. The types of FMEA are:

System - focuses on global system functions

Design - focuses on components and subsystems

Process - focuses on manufacturing and assembly processes

Service - focuses on service functions

Software - focuses on software functions

FMEA Usage

Historically, engineers have done a good job of evaluating the functions and the form of products and processes in the design phase. They have not always done so well at designing in reliability and quality. Often the engineer uses safety factors as a way of making sure that the design will work and to protect the user against product or process failure. As described in a recent article:

"A large safety factor does not necessarily translate into a reliable product. Instead, it often leads to an overdesigned product with reliability problems."

Failure Analysis Beats Murphy's Law

Mechanical Engineering, September 1993

FMEAs provide the engineer with a tool that can assist in providing reliable, safe, and customer pleasing products and processes. Since FMEAs help the engineer identify potential product or process failures, they can be used to:

Develop product or process requirements that minimize the likelihood of those failures.

Evaluate the requirements obtained from the customer or other participants in the design process to ensure that those requirements do not introduce potential failures.

Identify design characteristics that contribute to failures and design them out of the system or at least minimize the resulting effects.

Develop methods and procedures to develop and test the product/process to ensure that the failures have been successfully eliminated.

Track and manage potential risks in the design. Tracking the risks contributes to the development of corporate memory and the success of future products as well.

Ensure that any failures that could occur will not injure or seriously impact the customer of the product/process.

Benefits of FMEA

FMEA is designed to assist the engineer in improving the quality and reliability of a design. Properly used, the FMEA provides the engineer with several benefits. Among others, these benefits include:

Improve product/process reliability and quality

Increase customer satisfaction

Early identification and elimination of potential product/process failure modes

Prioritize product/process deficiencies

Capture engineering/organization knowledge

Emphasizes problem prevention

Documents risk and actions taken to reduce risk

Provide focus for improved testing and development

Minimizes late changes and associated cost

Catalyst for teamwork and idea exchange between functions

FMEA Timing

The FMEA is a living document. Throughout the product development cycle, changes and updates are made to the product and process. These changes can and often do introduce new failure modes. It is therefore important to review and/or update the FMEA when:

A new product or process is being initiated (at the beginning of the cycle).

Changes are made to the operating conditions the product or process is expected to function in.

A change is made to either the product or process design. The product and process are inter-related. When the product design is changed the process is impacted and vice-versa.

New regulations are instituted.

Customer feedback indicates problems in the product or process.

FMEA Procedure

The process for conducting an FMEA is straightforward. The basic steps are outlined below.

1. Describe the product/process and its function. An understanding of the product or process under consideration is important to have clearly articulated. This understanding simplifies the process of analysis by helping the engineer identify those product/process uses that fall within the intended function and which ones fall outside. It is important to consider both intentional and unintentional uses since product failure often ends in litigation, which can be costly and time consuming.

2. Create a Block Diagram of the product or process. A block diagram of the product/process should be developed. This diagram shows major components or process steps as blocks connected together by lines that indicate how the components or steps are related. The diagram shows the logical relationships of components and establishes a structure around which the FMEA can be developed. Establish a Coding System to identify system elements. The block diagram should always be included with the FMEA form.

3. Complete the header on the FMEA Form worksheet: Product/System, Subsys./Assy., Component, Design Lead, Prepared By, Date, Revision (letter or number), and Revision Date. Modify these headings as needed.

4. Use the diagram prepared above to begin listing items or functions. If items are components, list them in a logical manner under their subsystem/assembly based on the block diagram.

5. Identify Failure Modes. A failure mode is defined as the manner in which a component, subsystem, system, process, etc. could potentially fail to meet the design intent. Examples of potential failure modes include:

Corrosion

Hydrogen embrittlement

Electrical Short or Open

Torque Fatigue

Deformation

Cracking

6. A failure mode in one component can serve as the cause of a failure mode in another component. Each failure should be listed in technical terms. Failure modes should be listed for each function of each component or process step. At this point the failure mode should be listed whether or not the failure is likely to occur. Looking at similar products or processes and the failures that have been documented for them is an excellent starting point.

7. Describe the effects of those failure modes. For each failure mode identified the engineer should determine what the ultimate effect will be. A failure effect is defined as the result of a failure mode on the function of the product/process as perceived by the customer. They should be described in terms of what the customer might see or experience should the identified failure mode occur. Keep in mind the internal as well as the external customer. Examples of failure effects include:

Injury to the user

Inoperability of the product or process

Improper appearance of the product or process

Odors

Degraded performance

Noise

Establish a numerical ranking for the severity of the effect. A common industry standard scale uses 1 to represent no effect and 10 to indicate very severe, with the failure affecting system operation and safety without warning. The intent of the ranking is to help the analyst determine whether a failure would be a minor nuisance or a catastrophic occurrence to the customer. This enables the engineer to prioritize the failures and address the most serious issues first.

8. Identify the causes for each failure mode. A failure cause is defined as a design weakness that may result in a failure. The potential causes for each failure mode should be identified and documented. The causes should be listed in technical terms and not in terms of symptoms. Examples of potential causes include:

Improper torque applied

Improper operating conditions

Contamination

Erroneous algorithms

Improper alignment

Excessive loading

Excessive voltage

9. Enter the Probability factor. A numerical weight should be assigned to each cause that indicates how likely that cause is (probability of the cause occurring). A common industry standard scale uses 1 to represent not likely and 10 to indicate inevitable.

10. Identify Current Controls (design or process). Current Controls (design or process) are the mechanisms that prevent the cause of the failure mode from occurring or which detect the failure before it reaches the Customer. The engineer should now identify testing, analysis, monitoring, and other techniques that can or have been used on the same or similar products/processes to detect failures. Each of these controls should be assessed to determine how well it is expected to identify or detect failure modes. After a new product or process has been in use, previously undetected or unidentified failure modes may appear. The FMEA should then be updated and plans made to address those failures to eliminate them from the product/process.

11. Determine the likelihood of Detection. Detection is an assessment of the likelihood that the Current Controls (design and process) will detect the Cause of the Failure Mode or the Failure Mode itself, thus preventing it from reaching the Customer. Based on the Current Controls, consider the likelihood of Detection using the following table for guidance.

12. Review Risk Priority Numbers (RPN). The Risk Priority Number is a mathematical product of the numerical Severity, Probability, and Detection ratings:

RPN = (Severity) x (Probability) x (Detection)

The RPN is used to prioritize items that require additional quality planning or action.

13. Determine Recommended Action(s) to address potential failures that have a high RPN. These actions could include specific inspection, testing or quality procedures; selection of different components or materials; de-rating; limiting environmental stresses or operating range; redesign of the item to avoid the failure mode; monitoring mechanisms; performing preventative maintenance; and inclusion of back-up systems or redundancy.

14. Assign Responsibility and a Target Completion Date for these actions. This makes responsibility clear-cut and facilitates tracking.

15. Indicate Actions Taken. After these actions have been taken, re-assess the severity, probability and detection and review the revised RPNs. Are any further actions required?

16. Update the FMEA as the design or process changes, the assessment changes or new information becomes known.

FMEA: Failure Mode & Effects Analysis

FMEA: Severity, Occurrence, and Detection Definitions

Severity

Effect - Description - Ranking

Hazardous without warning - Very high severity ranking when a potential failure mode affects safe system operation without warning - 10

Hazardous with warning - Very high severity ranking when a potential failure mode affects safe system operation with warning - 9

Very High - System inoperable with destructive failure without compromising safety - 8

High - System inoperable with equipment damage - 7

Moderate - System inoperable with minor damage - 6

Low - System inoperable without damage - 5

Very Low - System operable with significant degradation of performance - 4

Minor - System operable with some degradation of performance - 3

Very Minor - System operable with minimal interference - 2

None - No effect - 1

Probability

PROBABILITY of Failure - Failure Probability - Ranking

Very High: Failure is almost inevitable - >1 in 2 - 10

Very High: Failure is almost inevitable - 1 in 3 - 9

High: Repeated failures - 1 in 8 - 8

High: Repeated failures - 1 in 20 - 7

Moderate: Occasional failures - 1 in 80 - 6

Moderate: Occasional failures - 1 in 400 - 5

Moderate: Occasional failures - 1 in 2,000 - 4

Low: Relatively few failures - 1 in 15,000 - 3

Low: Relatively few failures - 1 in 150,000 - 2

Remote: Failure is unlikely - <1 in 1,500,000 - 1
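
As one possible reading of this table, the sketch below maps an estimated probability of failure onto the 1-10 occurrence ranking. The thresholds are taken from the table; how to treat probabilities that fall between the listed points is a modelling choice, not something the table itself specifies.

# Map an estimated failure probability to an occurrence ranking (one possible interpretation).
OCCURRENCE_TABLE = [          # (tabulated probability of failure, ranking)
    (1 / 2, 10), (1 / 3, 9), (1 / 8, 8), (1 / 20, 7), (1 / 80, 6),
    (1 / 400, 5), (1 / 2000, 4), (1 / 15000, 3), (1 / 150000, 2),
]

def occurrence_ranking(p):
    # Return the first ranking whose tabulated probability the estimate reaches.
    for threshold, ranking in OCCURRENCE_TABLE:
        if p >= threshold:
            return ranking
    return 1   # Remote: failure is unlikely (< 1 in 1,500,000)

print(occurrence_ranking(0.06))    # 7  (at least 1 in 20)
print(occurrence_ranking(1e-7))    # 1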

Detectability

Detection - Likelihood of DETECTION by Design Control - Ranking

Absolute Uncertainty - Design control cannot detect potential cause/mechanism and subsequent failure mode - 10

Very Remote - Very remote chance the design control will detect potential cause/mechanism and subsequent failure mode - 9

Remote - Remote chance the design control will detect potential cause/mechanism and subsequent failure mode - 8

Very Low - Very low chance the design control will detect potential cause/mechanism and subsequent failure mode - 7

Low - Low chance the design control will detect potential cause/mechanism and subsequent failure mode - 6

Moderate - Moderate chance the design control will detect potential cause/mechanism and subsequent failure mode - 5

Moderately High - Moderately high chance the design control will detect potential cause/mechanism and subsequent failure mode - 4

High - High chance the design control will detect potential cause/mechanism and subsequent failure mode - 3

Very High - Very high chance the design control will detect potential cause/mechanism and subsequent failure mode - 2

Almost Certain - Design control will detect potential cause/mechanism and subsequent failure mode - 1

Basic Concepts of FMEA and FMECA

Failure Mode and Effects Analysis (FMEA) and Failure Modes, Effects and Criticality Analysis (FMECA) are methodologies designed to identify potential failure modes for a product or process, to assess the risk associated with those failure modes, to rank the issues in terms of importance and to identify and carry out corrective actions to address the most serious concerns.

Although the purpose, terminology and other details can vary according to type (e.g. Process FMEA, Design FMEA, etc.), the basic methodology is similar for all. This article presents a brief general overview of FMEA / FMECA analysis techniques and requirements.

FMEA / FMECA Overview

In general, FMEA / FMECA requires the identification of the following basic information (a minimal data-structure sketch follows the list):

Item(s)

Function(s)

Failure(s)

Effect(s) of Failure

Cause(s) of Failure

Current Control(s)

Recommended Action(s)

Plus other relevant details
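As a concrete illustration of how this basic information might be captured, the following is a minimal Python sketch of one FMEA worksheet row. The field names simply mirror the list above; they are illustrative and not mandated by any standard or library.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FmeaRow:
    """One line of an FMEA worksheet, holding the basic information listed above."""
    item: str                               # Item(s) or process step being analyzed
    function: str                           # Function(s) the item is meant to perform
    failure_mode: str                       # Failure(s): how the function could fail
    effects: List[str]                      # Effect(s) of failure on the system or customer
    causes: List[str]                       # Cause(s) of failure
    current_controls: List[str]             # Current control(s) that prevent or detect the failure
    recommended_actions: List[str] = field(default_factory=list)
    notes: str = ""                         # Other relevant details

# Illustrative example row
row = FmeaRow(
    item="Pump seal",
    function="Contain process fluid at operating pressure",
    failure_mode="Seal leaks",
    effects=["Fluid loss", "Possible equipment damage"],
    causes=["Seal wear", "Improper installation"],
    current_controls=["Incoming inspection", "Scheduled replacement"],
)
```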

Most analyses of this type also include some method to assess the risk associated with the issues identified during the analysis and to prioritize corrective actions. Two common methods include:

Risk Priority Numbers (RPNs)

Criticality Analysis (FMEA with Criticality Analysis = FMECA)

Published Standards and Guidelines

There are a number of published guidelines and standards for the requirements and recommended reporting format of FMEAs and FMECAs. Some of the main published standards for this type of analysis include SAE J1739, AIAG FMEA-3 and MIL-STD-1629A. In addition, many industries and companies have developed their own procedures to meet the specific requirements of their products/processes. Figure 1 shows a sample Process FMEA in the Automotive Industry Action Group (AIAG) FMEA-3 format.

Basic Analysis Procedure for FMEA or FMECA

The basic steps for performing an FMEA / FMECA analysis include:

Assemble the team.

Establish the ground rules.

Gather and review relevant information.

Identify the item(s) or process(es) to be analyzed.

Identify the function(s), failure(s), effect(s), cause(s) and control(s) for each item or process to be analyzed.


Evaluate the risk associated with the issues identified by the analysis.

Prioritize and assign corrective actions.

Perform corrective actions and re-evaluate risk.

Distribute, review and update the analysis, as appropriate.

Risk Evaluation Methods

A typical FMEA incorporates some method to evaluate the risk associated with the potential problems identified through the analysis. The two most common methods, Risk Priority Numbers and Criticality Analysis, are described next.

Risk Priority Numbers

To use the Risk Priority Number (RPN) method to assess risk, the analysis team must:

Rate the severity of each effect of failure.

Rate the likelihood of occurrence for each cause of failure.

Rate the likelihood of prior detection for each cause of failure (i.e. the likelihood of detecting the problem before it reaches the end user or customer).

Calculate the RPN by obtaining the product of the three ratings:

RPN = Severity x Occurrence x Detection

The RPN can then be used to compare issues within the analysis and to prioritize problems for corrective action.
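A minimal Python sketch of this calculation, assuming each issue has already been given 1-10 ratings for severity, occurrence, and detection; the failure modes and ratings below are purely illustrative:

```python
# RPN sketch: RPN = Severity x Occurrence x Detection, each rated 1-10.
# The issues and ratings are invented for the example, not from a real analysis.
issues = [
    {"failure_mode": "Seal leaks",     "severity": 7, "occurrence": 4, "detection": 6},
    {"failure_mode": "Bearing seizes", "severity": 8, "occurrence": 2, "detection": 3},
    {"failure_mode": "Gauge misreads", "severity": 4, "occurrence": 5, "detection": 7},
]

for issue in issues:
    issue["rpn"] = issue["severity"] * issue["occurrence"] * issue["detection"]

# Highest RPN first: these are the leading candidates for corrective action.
for issue in sorted(issues, key=lambda i: i["rpn"], reverse=True):
    print(f'{issue["failure_mode"]:15s} RPN = {issue["rpn"]}')
```

After corrective actions are completed, the same three ratings are re-assessed and the RPN recalculated to confirm that the risk has actually been reduced.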

Criticality Analysis

The MIL-STD-1629A document describes two types of criticality analysis: quantitative and qualitative. To use the quantitative criticality analysis method, the analysis team must:

Define the reliability/unreliability for each item, at a given operating time.

Identify the portion of the item’s unreliability that can be attributed to each potential failure mode.

Rate the probability of loss (or severity) that will result from each failure mode that may occur.

Calculate the criticality for each potential failure mode by obtaining the product of the three factors:

Mode Criticality = Item Unreliability x Mode Ratio of Unreliability x Probability of Loss

Calculate the criticality for each item by obtaining the sum of the criticalities for each failure mode that has been identified for the item.

Item Criticality = SUM of Mode Criticalities
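A minimal Python sketch of the quantitative calculation, assuming the item unreliability at the operating time, each mode's share of that unreliability, and a probability of loss per mode have already been estimated; all figures below are illustrative:

```python
# Quantitative criticality sketch:
#   Mode Criticality = Item Unreliability x Mode Ratio of Unreliability x Probability of Loss
#   Item Criticality = sum of the mode criticalities
# All numbers are illustrative estimates for one item at a given operating time.
item_unreliability = 0.02            # probability the item fails during the operating time

failure_modes = [
    # (mode name, share of item unreliability, probability of loss given the mode)
    ("Seal leaks",     0.6, 0.3),
    ("Bearing seizes", 0.3, 0.9),
    ("Housing cracks", 0.1, 1.0),
]

mode_criticalities = {
    name: item_unreliability * mode_ratio * prob_of_loss
    for name, mode_ratio, prob_of_loss in failure_modes
}
item_criticality = sum(mode_criticalities.values())

for name, crit in mode_criticalities.items():
    print(f"{name:15s} mode criticality = {crit:.5f}")
print(f"Item criticality = {item_criticality:.5f}")
```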

To use the qualitative criticality analysis method to evaluate risk and prioritize corrective actions, the analysis team must:


Rate the severity of the potential effects of failure.

Rate the likelihood of occurrence for each potential failure mode.

Compare failure modes via a Criticality Matrix, which identifies severity on the horizontal axis and occurrence on the vertical axis.
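As an illustration of the qualitative approach, the sketch below groups hypothetical failure modes into criticality matrix cells keyed by their severity and occurrence ratings; the failure modes and ratings are invented for the example.

```python
# Qualitative criticality sketch: group failure modes into (severity, occurrence)
# cells of a criticality matrix. Modes nearest the high-severity / high-occurrence
# corner of the matrix get attention first.
from collections import defaultdict

# (failure mode, severity rating, occurrence rating) - illustrative values
modes = [
    ("Seal leaks",     7, 4),
    ("Bearing seizes", 8, 2),
    ("Gauge misreads", 4, 5),
]

matrix = defaultdict(list)               # key: (severity, occurrence) cell
for name, severity, occurrence in modes:
    matrix[(severity, occurrence)].append(name)

# List occupied cells, highest severity first (ties broken by occurrence).
for severity, occurrence in sorted(matrix, reverse=True):
    print(f"severity={severity}, occurrence={occurrence}: {matrix[(severity, occurrence)]}")
```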

Applications and Benefits

The FMEA / FMECA analysis procedure is a tool that has been adapted in many different ways for many different purposes. It can contribute to improved designs for products and processes, resulting in higher reliability, better quality, increased safety, enhanced customer satisfaction and reduced costs. The tool can also be used to establish and optimize maintenance plans for repairable systems and/or contribute to control plans and other quality assurance procedures. It provides a knowledge base of failure mode and corrective action information that can be used as a resource in future troubleshooting efforts and as a training tool for new engineers. In addition, an FMEA or FMECA is often required to comply with safety and quality requirements, such as ISO 9001, QS-9000, ISO/TS 16949, Six Sigma, FDA Good Manufacturing Practices (GMPs), Process Safety Management (PSM), etc.