Quality Management Answers


FMEA

A failure modes and effects analysis (FMEA) is a procedure in product development and operations management for analysis of potential failure modes within a system for classification by the severity and likelihood of the failures. A successful FMEA activity helps a team to identify potential failure modes based on past experience with similar products or processes, enabling the team to design those failures out of the system with the minimum of effort and resource expenditure, thereby reducing development time and costs. It is widely used in manufacturing industries in various phases of the product life cycle and is now increasingly finding use in the service industry. Failure modes are any errors or defects in a process, design, or item, especially those that affect the customer, and can be potential or actual. Effects analysis refers to studying the consequences of those failures.

The FMEA cycle uses the following terms:

Failure: The loss of an intended function of a device under stated conditions.

Failure mode: The manner by which a failure is observed; it generally describes the way the failure occurs.

Failure effect: The immediate consequences of a failure on operation, function or functionality, or status of some item.

Indenture levels: An identifier for item complexity. Complexity increases as levels are closer to one.

Local effect: The failure effect as it applies to the item under analysis.

Next higher level effect: The failure effect as it applies at the next higher indenture level.

End effect: The failure effect at the highest indenture level or total system.

Failure cause: Defects in design, process, quality, or part application, which are the underlying cause of the failure or which initiate a process that leads to failure.

Severity: The consequences of a failure mode. Severity considers the worst potential consequence of a failure, determined by the degree of injury, property damage, or system damage that could ultimately occur.

History

Procedures for conducting FMECA were described in US Armed Forces Military Procedures document MIL-P-1629[2] (1949; revised in 1980 as MIL-STD-1629A).[3] By the early 1960s, contractors for the U.S. National Aeronautics and Space Administration (NASA) were using variations of FMECA or FMEA under a variety of names.[4][5] NASA programs using FMEA variants included Apollo, Viking, Voyager, Magellan, Galileo, and Skylab.[6][7][8] The civil aviation industry was an early adopter of FMEA, with the Society of Automotive Engineers publishing ARP926 in 1967.[9]

During the 1970s, use of FMEA and related techniques spread to other industries. In 1971 NASA prepared a report for the U.S. Geological Survey recommending the use of FMEA in the assessment of offshore petroleum exploration.[10] FMEA, as applied to HACCP on the Apollo space program, later moved into the food industry in general.[11] In the late 1970s the Ford Motor Company introduced FMEA to the automotive industry for safety and regulatory consideration after the Pinto affair, and applied the same approach to processes (PFMEA) to consider potential process-induced failures prior to launching production.

Although initially developed by the military, FMEA methodology is now extensively used in a variety of industries including semiconductor processing, food service, plastics, software, and healthcare.[12][13] It is integrated into the Automotive Industry Action Group's (AIAG) Advanced Product Quality Planning (APQP) process to provide risk mitigation in both product and process development phases. Each potential cause must be considered for its effect on the product or process and, based on the risk, actions are determined and risks revisited after actions are complete. Toyota has taken this one step further with its Design Review Based on Failure Mode (DRBFM) approach. The method is now supported by the American Society for Quality, which provides detailed guides on applying the method.[14]

Implementation

In FMEA, failures are prioritized according to how serious their consequences are, how frequently they occur and how easily they can be detected. An FMEA also documents current knowledge and actions about the risks of failures for use in continuous improvement. FMEA is used during the design stage with an aim to avoid future failures (sometimes called DFMEA in that case). Later it is used for process control, before and during ongoing operation of the process. Ideally, FMEA begins during the earliest conceptual stages of design and continues throughout the life of the product or service.

The outcomes of an FMEA development are actions to prevent or reduce the severity or likelihood of failures, starting with the highest-priority ones. It may be used to evaluate risk management priorities for mitigating known threat vulnerabilities. FMEA helps select remedial actions that reduce cumulative impacts of life-cycle consequences (risks) from a system failure (fault). It is used in many formal quality systems such as QS-9000 or ISO/TS 16949.

Using FMEA when designing


FMEA is intended to provide an analytical approach to reviewing potential failure modes and their associated causes. FMEA is a recognised tool for assessing which risks are of greatest concern, and therefore which risks to address in order to prevent problems before they arise. The development of these specifications helps to ensure the product will meet the defined requirements and customer needs.

The pre-work

The process for conducting an FMEA is typically developed in three main phases, in which appropriate actions need to be defined. Before starting with an FMEA, several other techniques are frequently employed to ensure that robustness and past history are included in the analysis. A robustness analysis can be obtained from interface matrices, boundary diagrams, and parameter diagrams. Failures are often found from external 'noise factors' and from shared interfaces with other parts and/or systems.


Typically, a description of the system and its function is developed, considering both intentional and unintentional uses. A block diagram of the system is often created for inclusion with the FMEA, giving an overview of the major components or process steps and how they are related. These are called logical relations, around which the FMEA can be developed. The primary FMEA document or 'worksheet' lists all of the items or functions of the system in a logical manner, typically based on the block diagram. A typical worksheet has the following columns:

Item / Function; Potential failure mode; Potential effects of failure; S (severity rating); Potential cause(s); O (occurrence rating); Current controls; D (detection rating); CRIT (critical characteristic); RPN (risk priority number); Recommended actions; Responsibility and target completion date; Action taken; New S; New O; New D; New RPN.

Example worksheet entry:

Item / Function: Fill tub
Potential failure mode: High level sensor never trips
Potential effects of failure: Liquid spills on customer floor
S (severity rating): 8
Potential cause(s): Level sensor failed; level sensor disconnected
O (occurrence rating): 2
Current controls: Fill timeout based on time to fill to low level sensor
D (detection rating): 5
CRIT (critical characteristic): N
RPN (risk priority number): 80
Recommended actions: Perform cost analysis of adding an additional sensor halfway between the low and high level sensors
Responsibility and target completion date: Jane Doe, 10-June-2011
Action taken / New S / New O / New D / New RPN: (blank in this example)

NOTE: The example format shown above is not in line with MIL-STD-1629 or civil aerospace practice. The basic terms given in the first paragraph of this page are not reflected in this template.


Step 1: Occurrence

In this step it is necessary to look at the cause of a failure mode and the number of times it occurs. This can be done by looking at similar products or processes and the failure modes that have been documented for them in the past. A failure cause is looked upon as a design weakness. All the potential causes for a failure mode should be identified and documented. Again, this should be in technical terms. Examples of causes are: erroneous algorithms, excessive voltage or improper operating conditions. A failure mode is given an occurrence ranking (O), again 1–10. Actions need to be determined if the occurrence is high (meaning > 4 for non-safety failure modes and > 1 when the severity number from step 2 is 9 or 10). This step is called the detailed development section of the FMEA process. Occurrence can also be defined as a percentage: for example, if a non-safety issue occurs in less than 1% of units, it may be given an occurrence ranking of 1. The exact thresholds depend on the product and the customer specification.

Rating Meaning

1 No known occurrences on similar products or processes

2/3 Low (relatively few failures)

4/5/6 Moderate (occasional failures)

7/8 High (repeated failures)

9/10 Very high (failure is almost inevitable)

[15]
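The percentage-based interpretation of occurrence described above can be made concrete with a small lookup. The sketch below is only illustrative: the cut-off values and the helper function are assumptions chosen for demonstration, since the actual thresholds depend on the product and customer specification.

```python
# Illustrative only: map an observed failure rate to an occurrence ranking (O).
# The cut-off values are assumed for demonstration; real thresholds come from
# the product and customer specification.

def occurrence_ranking(failure_rate: float) -> int:
    """Return an occurrence ranking O from 1 (no known occurrences) to 10."""
    thresholds = [
        (0.00001, 1),   # essentially never observed on similar products
        (0.0001, 2),    # low: relatively few failures
        (0.001, 3),
        (0.005, 4),     # moderate: occasional failures
        (0.01, 5),
        (0.02, 6),
        (0.05, 7),      # high: repeated failures
        (0.10, 8),
        (0.20, 9),      # very high
    ]
    for upper_bound, rank in thresholds:
        if failure_rate <= upper_bound:
            return rank
    return 10           # failure is almost inevitable


print(occurrence_ranking(0.008))  # -> 5 (moderate, occasional failures)
```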

Step 2: Severity

Determine all failure modes based on the functional requirements and their effects. Examples of failure modes are: electrical short-circuiting, corrosion or deformation. A failure mode in one component can lead to a failure mode in another component; therefore each failure mode should be listed in technical terms and for function. Thereafter the ultimate effect of each failure mode needs to be considered. A failure effect is defined as the result of a failure mode on the function of the system as perceived by the user. In this way it is convenient to write these effects down in terms of what the user might see or experience. Examples of failure effects are: degraded performance, noise or even injury to a user. Each effect is given a severity number (S) from 1 (no danger) to 10 (critical). These numbers help an engineer to prioritize the failure modes and their effects. If the severity of an effect is rated 9 or 10, actions are considered to change the design by eliminating the failure mode, if possible, or protecting the user from the effect. A severity rating of 9 or 10 is generally reserved for those effects which would cause injury to a user or otherwise result in litigation.

Rating Meaning

1 No effect

2 Very minor (only noticed by discriminating customers)

3 Minor (affects very little of the system, noticed by average customer)

4/5/6 Moderate (most customers are annoyed)

7/8 High (causes a loss of primary function; customers are dissatisfied)

9/10 Very high and hazardous (product becomes inoperative; customers angered; the failure may result in unsafe operation and possible injury)

[15]

Step 3: Detection


When appropriate actions are determined, it is necessary to test their efficiency. In addition, design verification is needed, and the proper inspection methods need to be chosen. First, an engineer should look at the current controls of the system that prevent failure modes from occurring or that detect the failure before it reaches the customer. Thereafter one should identify testing, analysis, monitoring and other techniques that can be or have been used on similar systems to detect failures. From these controls an engineer can learn how likely it is for a failure to be identified or detected. Each combination from the previous two steps receives a detection number (D). This ranks the ability of planned tests and inspections to remove defects or detect failure modes in time. The assigned detection number measures the risk that the failure will escape detection. A high detection number indicates that the chances are high that the failure will escape detection, or in other words, that the chances of detection are low.

Rating Meaning

1 Certain - fault will be caught on test

2 Almost Certain

3 High

4/5/6 Moderate

7/8 Low

9/10 Fault will be passed to customer undetected

[15]

After these three basic steps, risk priority numbers (RPN) are calculated.

Risk priority number (RPN)

RPNs play an important part in the choice of an action against failure modes. They are threshold values in the evaluation of these actions. After ranking the severity, occurrence and detectability, the RPN can be easily calculated by multiplying these three numbers: RPN = S × O × D. This has to be done for the entire process and/or design. Once this is done, it is easy to determine the areas of greatest concern. The failure modes that have the highest RPN should be given the highest priority for corrective action. This means it is not always the failure modes with the highest severity numbers that should be treated first. There could be less severe failures that occur more often and are less detectable.

After these values are allocated, recommended actions with targets, responsibility and dates of implementation are noted. These actions can include specific inspection, testing or quality procedures, redesign (such as selection of new components), adding more redundancy, and limiting environmental stresses or the operating range. Once the actions have been implemented in the design/process, the new RPN should be checked to confirm the improvements. These results are often put in graphs for easy visualization. Whenever a design or a process changes, an FMEA should be updated. A few logical but important considerations come to mind (a short sketch of the RPN calculation follows the list below):

● Try to eliminate the failure mode (some failures are more preventable than others)
● Minimize the severity of the failure (the severity of a failure cannot be changed)
● Reduce the occurrence of the failure mode
● Improve the detection
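As a minimal sketch of the RPN arithmetic described above: the first record reuses the "Fill tub" example from the worksheet earlier on this page, the second record is hypothetical, and the data-structure layout is an illustrative assumption rather than a prescribed FMEA format.

```python
# Minimal sketch of the RPN calculation described above.  The record mirrors a
# few worksheet columns; the first entry is the "Fill tub" example from this page.
from dataclasses import dataclass


@dataclass
class FailureMode:
    item_function: str
    failure_mode: str
    severity: int     # S, 1-10
    occurrence: int   # O, 1-10
    detection: int    # D, 1-10

    @property
    def rpn(self) -> int:
        """Risk priority number: RPN = S x O x D."""
        return self.severity * self.occurrence * self.detection


modes = [
    FailureMode("Fill tub", "High level sensor never trips", 8, 2, 5),
    FailureMode("Fill tub", "Low level sensor never trips", 5, 3, 4),  # hypothetical entry
]

# Highest RPN first: these failure modes get priority for corrective action.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.failure_mode}: RPN = {fm.rpn}")
```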

Timing of FMEA

The FMEA should be updated whenever:

● A new cycle begins (new product/process)
● Changes are made to the operating conditions
● A change is made in the design
● New regulations are instituted


● Customer feedback indicates a problem

Uses of FMEA

● Development of system requirements that minimize the likelihood of failures.
● Development of methods to design and test systems to ensure that the failures have been eliminated.
● Evaluation of the requirements of the customer to ensure that those do not give rise to potential failures.
● Identification of certain design characteristics that contribute to failures, and minimization or elimination of those effects.
● Tracking and managing potential risks in the design. This helps avoid the same failures in future projects.
● Ensuring that any failure that could occur will not injure the customer or seriously impact a system.
● Producing world-class quality products.

Advantages

● Improve the quality, reliability and safety of a product/process
● Improve company image and competitiveness
● Increase user satisfaction
● Reduce system development time and cost
● Collect information to reduce future failures, capture engineering knowledge
● Reduce the potential for warranty concerns
● Early identification and elimination of potential failure modes
● Emphasize problem prevention
● Minimize late changes and associated cost
● Catalyst for teamwork and idea exchange between functions
● Reduce the possibility of the same kind of failure in the future
● Reduce the impact on the company's profit margin
● Reduce possible scrap in production

Limitations

Since FMEA is effectively dependent on the members of the committee which examines product failures, it is limited by their experience of previous failures. If a failure mode cannot be identified, then external help is needed from consultants who are aware of the many different types of product failure. FMEA is thus part of a larger system of quality control, where documentation is vital to implementation. General texts and detailed publications are available in forensic engineering and failure analysis. It is a general requirement of many specific national and international standards that FMEA is used in evaluating product integrity. If used as a top-down tool, FMEA may only identify major failure modes in a system. Fault tree analysis (FTA) is better suited for "top-down" analysis. When used as a "bottom-up" tool FMEA can augment or complement FTA and identify many more causes and failure modes resulting in top-level symptoms. It is not able to discover complex failure modes involving multiple failures within a subsystem, or to report expected failure intervals of particular failure modes up to the upper level subsystem or system.[citation needed]

Additionally, the multiplication of the severity, occurrence and detection rankings may result in rank reversals, where a less serious failure mode receives a higher RPN than a more serious failure mode.[16] The reason for this is that the rankings are ordinal scale numbers, and multiplication is not defined for ordinal numbers. The ordinal rankings only say that one ranking is better or worse than another, but not by how much. For instance, a ranking of "2" may not be twice as severe as a ranking of "1," or an "8" may


not be twice as severe as a "4," but multiplication treats them as though they are. See Level of measurement for further discussion.

Software

Most FMEAs are created as a spreadsheet. Specialized FMEA software packages exist that offer some advantages over spreadsheets.

Types of FMEA

● Process: analysis of manufacturing and assembly processes
● Design: analysis of products prior to production
● Concept: analysis of systems or subsystems in the early design concept stages
● Equipment: analysis of machinery and equipment design before purchase
● Service: analysis of service industry processes before they are released to impact the customer
● System: analysis of the global system functions
● Software: analysis of the software functions

TPM

Total productive maintenance (TPM) originated in Japan in 1971 as a method for improved machine availability through better utilization of maintenance and production resources.

Whereas in most production settings the operator is not viewed as a member of the maintenance team, in TPM the machine operator is trained to perform many of the day-to-day tasks of simple maintenance and fault-finding. Teams are created that include a technical expert (often an engineer or maintenance technician) as well as operators. In this setting the operators are enabled to understand the machinery and identify potential problems, correcting them before they can impact production and, by doing so, decreasing downtime and reducing costs of production.

TPM is a critical adjunct to lean manufacturing. If machine uptime is not predictable and if process capability is not sustained, the process must keep extra stocks to buffer against this uncertainty, and flow through the process will be interrupted. Unreliable uptime is caused by breakdowns or badly performed maintenance. Correct maintenance will allow uptime to improve and speed production through a given area, allowing a machine to run at its designed capacity of production.

One way to think of TPM is "deterioration prevention": deterioration is what happens naturally to anything that is not "taken care of". For this reason many people refer to TPM as "total productive manufacturing" or "total process management". TPM is a proactive approach that essentially aims to identify issues as soon as possible and plan to prevent any issues before occurrence. One motto is "zero error, zero work-related accident, and zero loss".


Introduction

TPM is a maintenance process developed for improving productivity by making processes more reliable and less wasteful. TPM is an extension of TQM (total quality management). The objective of TPM is to maintain the plant or equipment in good condition without interfering with the daily process. To achieve this


objective, preventive and predictive maintenance are required. By following the philosophy of TPM we can minimize unexpected failures of the equipment. To implement TPM, the production unit and maintenance unit should work jointly.

The original goal of total productive management is to "continuously improve all operational conditions, within a production system, by stimulating the daily awareness of all employees" (Seiichi Nakajima, Japan, JIPM). TPM focuses primarily on manufacturing (although its benefits are applicable to virtually any "process") and is the first methodology Toyota used to improve its global position (1950s). After TPM, the focus was stretched, and suppliers and customers were also involved (supply chain); this next methodology was called lean manufacturing. This sheet gives an overview of TPM in its original form.

An accurate and practical implementation of TPM will increase productivity within the total organization, where:

(1) a clear business culture is designed to continuously improve the efficiency of the total production system
(2) a standardized and systematic approach is used, where all losses are prevented and/or known
(3) all departments influencing productivity are involved, to move from a reactive to a predictive mindset
(4) a transparent multidisciplinary organization is reaching zero losses
(5) steps are taken as a journey, not as a quick menu

Finally, TPM will provide practical and transparent ingredients to reach operational excellence.

History

TPM is an evolving process, starting from a Japanese idea that can be traced back to 1951, when preventive maintenance was introduced into Japan from the USA (Deming). Nippondenso, part of Toyota, was the first company in Japan to introduce plant-wide preventive maintenance in 1960. In preventive maintenance, operators produced goods using machines and the maintenance group was dedicated to the work of maintaining those machines. However, with the high level of automation at Nippondenso, maintenance became a problem, as many more maintenance personnel were now required. So the management decided that much of the routine maintenance of equipment would now be carried out by the operators themselves (autonomous maintenance, one of the features of TPM). Having the operator perform routine maintenance is more cost-effective, as the operator (compared to a highly skilled engineer) is on a lower pay rate. This is not only about reducing costs, however: the operator has a better understanding of how the equipment works on a daily basis, can tell if an issue is appearing, can tell if quality is decreasing, and, through constant learning, is allowed to follow a career path to a better job. The maintenance group then focused only on more complex problems and project work for long-term upgrades.

The maintenance group performed equipment modifications that would improve reliability. These modifications were then made or incorporated into new equipment. The work of the maintenance group, with the support and input of operators and production engineers, is then to make changes that lead to maintenance prevention and increased quality through fewer defects and a reduction in scrap levels. Thus preventive maintenance, along with maintenance prevention and maintainability improvement, was grouped as productive maintenance. The aim of productive maintenance was to maximize plant and equipment effectiveness to achieve the optimum life cycle cost of production equipment. Nippondenso already had quality circles which involved the employees in changes. Therefore, all employees took part in implementing productive maintenance. Based on these developments, Nippondenso was awarded the distinguished plant prize for developing and implementing TPM by the Japanese Institute of Plant Engineers (JIPE). Thus Nippondenso of the Toyota group became the first company to obtain TPM certification.

Implementation

TPM has basically three goals: zero product defects, zero unplanned equipment failures and zero accidents. It sets out to achieve these goals by gap analysis of previous historical records of product defects, equipment failures and accidents. Then, through a clear understanding of this gap analysis (fishbone cause-effect analysis, why-why cause-effect analysis, and P-M analysis), it plans a physical


investigation to discover new latent fuguai (slight deterioration) during the first step in TPM autonomous maintenance, called "initial cleaning".

Many companies struggle to implement TPM for two main reasons. The first is having insufficient knowledge and skills, especially in understanding the linkages between the 8 pillar activities in TPM.[1] It does not help that most TPM books are long on theory but scanty on implementation details. The second reason is that TPM requires more time, resources and effort than most of these companies believe they can afford. A typical TPM implementation requires company-wide participation, and full results can only be seen after three, and sometimes five, years. The main reason for this long duration is the basic involvement and training required for autonomous maintenance participation, where operators participate in restoring the equipment to its original capability and condition and then improving the equipment.

An effective fast-track TPM implementation approach has been documented as successful in the paper mill and electronics industries. It circumvented this problem by assigning project teams to do autonomous maintenance for the AM steps of 1) initial cleaning and 2) eliminating sources of contamination and improving equipment accessibility. Production operators take over the autonomous maintenance after AM step 3 (initial maintenance standards) has been established. This has been proven to reduce TPM implementation time by about 50%.

TPM identifies the six losses (types of waste, or muda), namely set-up and initial adjustment time, equipment breakdown time, idling and minor losses, speed (cycle time) losses, start-up quality losses, and in-process quality losses, and then works systematically to eliminate them by making improvements (kaizen). TPM has 8 pillars of activity,[1] each being set to achieve a "zero" target. These 8 pillars are: focused improvement (kobetsu kaizen); autonomous maintenance (jishu hozen); planned maintenance; training and education; early-phase management; quality maintenance (hinshitsu hozen); office TPM; and safety, health, and environment. A few organisations also add pillars according to their workplace, such as tools management or information technology. The base for TPM activity is 5S: Seiri (sorting out the required and not-required items), Seiton (systematic arrangement of the required items), Seiso (cleanliness), Seiketsu (standardisation) and Shitsuke (self-discipline).

The pillars address a) efficient equipment utilisation, b) efficient worker utilisation and c) efficient material and energy utilisation. In more detail:

1) Focused improvement (kobetsu kaizen): continuous improvement, even in small steps.
2) Planned maintenance: focuses on increasing the availability of equipment and reducing breakdowns of machines.
3) Initial control: establishing the system to launch production of a new product or new equipment in a minimum run-up time.
4) Education and training: formation of autonomous workers who have the skill and technique for autonomous maintenance.
5) Autonomous maintenance (jishu hozen): "maintaining one's equipment by oneself". There are 7 steps in, and activities of, jishu hozen.
6) Quality maintenance (hinshitsu hozen): the establishment of machine conditions that will not allow the occurrence of defects, and control of such conditions as required to sustain zero defects.
7) Office TPM: making an efficient working office that eliminates losses.
8) Safety, hygiene and environment: the main role of SHE is to create a safe and healthy workplace where accidents do not occur, to uncover and improve hazardous areas, and to carry out activities that preserve the environment.

Other pillars, such as tools management, aim to increase the availability of equipment by reducing tool resetting time, to reduce tool consumption cost, and to increase tool life.

TPM success measurement: a set of performance metrics considered to fit well in a lean manufacturing/TPM environment is overall equipment effectiveness, or OEE. For advanced TPM world-class practitioners, the OEE cannot be converted to costs using Target Costing Management (TCM). OEE measurements are used as a guide to the potential improvement that can be made to a piece of equipment; by identifying which of the six losses is the greatest, the techniques applicable to that type of loss can then be applied. Consistent application of the applicable improvement techniques to the sources of major losses will positively impact the performance of that equipment. Using a criticality analysis across the factory should identify which equipment should be improved first, in order to gain the quickest overall improvement in factory performance.
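For illustration, the usual OEE arithmetic (availability × performance × quality, the standard textbook formulation) can be sketched as below; the shift figures are invented and not taken from this text.

```python
# Sketch of the usual OEE calculation: OEE = availability x performance x quality.
# The shift figures below are invented for illustration only.

planned_time_min = 480        # planned production time for the shift
downtime_min = 60             # breakdowns + set-up and adjustment losses
ideal_cycle_time_min = 1.0    # ideal time to produce one unit
total_units = 380             # units actually produced
good_units = 361              # units produced right first time

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_min * total_units) / run_time_min
quality = good_units / total_units

oee = availability * performance * quality
print(f"Availability {availability:.1%}, Performance {performance:.1%}, "
      f"Quality {quality:.1%}, OEE {oee:.1%}")
```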


The use of cost deployment is quite rare, but it can be very useful in identifying priorities for selective TPM deployment.

Steps to start TPM begin with identifying the key people. Then:

● Management should learn the philosophy.
● Management must promote the philosophy.
● Training for all the employees.
● Identify the areas where improvement is needed.
● Make an implementation plan.
● Form an autonomous group.

SPC

Statistical process control (SPC) is the application of statistical methods to the monitoring and control of a process to ensure that it operates at its full potential to produce conforming product. Under SPC, a process behaves predictably to produce as much conforming product as possible with the least possible waste. While SPC has been applied most frequently to controlling manufacturing lines, it applies equally well to any process with a measurable output. Key tools in SPC are control charts, a focus on continuous improvement, and designed experiments.

Much of the power of SPC lies in the ability to examine a process and the sources of variation in that process using tools that give weight to objective analysis over subjective opinions and that allow the strength of each source to be determined numerically. Variations in the process that may affect the quality of the end product or service can be detected and corrected, thus reducing waste as well as the likelihood that problems will be passed on to the customer. With its emphasis on early detection and prevention of problems, SPC has a distinct advantage over other quality methods, such as inspection, that apply resources to detecting and correcting problems after they have occurred.

In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product or service from end to end. This is partially due to a diminished likelihood that the final product will have to be reworked, but it may also result from using SPC data to identify bottlenecks, wait times, and other sources of delays within the process. Process cycle time reductions coupled with improvements in yield have made SPC a valuable tool from both a cost reduction and a customer satisfaction standpoint.


History

Statistical process control was pioneered by Walter A. Shewhart in the early 1920s. W. Edwards Deming later applied SPC methods in the United States during World War II, thereby successfully improving quality in the manufacture of munitions and other strategically important products. Deming was also instrumental in introducing SPC methods to Japanese industry after the war had ended.[1][2]

Shewhart created the basis for the control chart and the concept of a state of statistical control through carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes seldom produces a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (for example, Brownian motion


of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process (common causes of variation), while others display uncontrolled variation that is not present in the process causal system at all times (special causes of variation).[3]

In 1988, the Software Engineering Institute introduced the notion that SPC can be usefully applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). This idea exists today within the Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI). The notion that SPC is a useful tool when applied to non-repetitive, knowledge-intensive processes such as engineering processes has encountered much skepticism, and remains controversial today.[4][5]

General

In mass manufacturing, the quality of the finished article was traditionally achieved through post-manufacturing inspection of the product, accepting or rejecting each article (or samples from a production lot) based on how well it met its design specifications. In contrast, statistical process control uses statistical tools to observe the performance of the production process in order to predict significant deviations that may later result in rejected product.

A main concept is that, for any measurable process characteristic, the causes of variation can be separated into two distinct classes: 1) normal (sometimes also referred to as common or chance) causes of variation, and 2) assignable (sometimes also referred to as special) causes of variation. The idea is that most processes have many causes of variation; most of them are minor and can be ignored, and if we can identify the few dominant causes, we can focus our resources on those. SPC allows us to detect when the few dominant causes of variation are present. If the dominant (assignable) causes of variation can be detected, potentially they can be identified and removed. Once removed, the process is said to be stable, which means that its resulting variation can be expected to stay within a known set of limits, at least until another assignable cause of variation is introduced.

For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams, and some will have slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacture begin to wear), this distribution can change. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than specified. If this change is allowed to continue unchecked, more and more product will be produced that falls outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap.

By observing at the right time what happened in the process that led to a change, the quality engineer or any member of the team responsible for the production line can troubleshoot the root cause of the variation that has crept into the process and correct the problem.

How to Use SPC


Statistical process control may be broadly broken down into three sets of activities: understanding the process, understanding the causes of variation, and elimination of the sources of special-cause variation.

In understanding a process, the process is typically mapped out and monitored using control charts. Control charts are used to identify variation that may be due to special causes, and to free the user from concern over variation due to common causes. This is a continuous, ongoing activity. When a process is stable and does not trigger any of the detection rules for a control chart, a process capability


analysis may also be performed to predict the ability of the current process to produce conforming (i.e. within specification) product in the future.

When excessive variation is identified by the control chart detection rules, or the process capability is found lacking, additional effort is exerted to determine the causes of that variance. The tools used include Ishikawa diagrams, designed experiments and Pareto charts. Designed experiments are critical to this phase of SPC, as they are the only means of objectively quantifying the relative importance of the many potential causes of variation.

Once the causes of variation have been quantified, effort is spent in eliminating those causes that are both statistically and practically significant (i.e. a cause that has only a small but statistically significant effect may not be considered cost-effective to fix; however, a cause that is not statistically significant can never be considered practically significant). Generally, this includes development of standard work, error-proofing and training. Additional process changes may be required to reduce variation or align the process with the desired target, especially if there is a problem with process capability.

For digital SPC charts, so-called SPC rules usually come with some rule-specific logic that determines a 'derived value' to be used as the basis for some (setting) correction. One example of such a derived value, for the common 'N numbers in a row trending up or down' rule, is: derived value = last value + average difference between the last N numbers, which in effect extends the run with the next expected value.

Most SPC charts work best for numeric data with Gaussian assumptions. Recently a new control chart, the real-time contrasts chart,[6] was proposed to handle process data with complex characteristics, e.g. high-dimensional, mixed numerical and categorical, missing-valued, non-Gaussian, or non-linear relationships.
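The 'derived value' rule just described can be sketched as follows; the window size and readings are illustrative assumptions, not prescribed values.

```python
# Minimal sketch of the "derived value" described above for the
# "N numbers in a row trending up or down" rule: extend the run with the
# next expected value.  Window size and data are illustrative assumptions.

def derived_value(observations, n: int = 6):
    """last value + average difference between the last n observations."""
    window = observations[-n:]
    diffs = [b - a for a, b in zip(window, window[1:])]
    avg_diff = sum(diffs) / len(diffs)
    return window[-1] + avg_diff


readings = [10.0, 10.1, 10.3, 10.4, 10.6, 10.8]   # steadily drifting upward
# The derived value could be used as the basis for a setting correction.
print(round(derived_value(readings), 2))           # -> 10.96
```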

Seven Basic Tools of Quality

The Seven Basic Tools of Quality is a designation given to a fixed set of graphical techniques identified as being most helpful in troubleshooting issues related to quality.[1] They are called basic because they are suitable for people with little formal training in statistics and because they can be used to solve the vast majority of quality-related issues.[2]:198

The tools are:[3]

● The cause-and-effect or Ishikawa diagram
● The check sheet
● The control chart
● The histogram
● The Pareto chart
● The scatter diagram
● Stratification (alternately, flow chart or run chart)

The designation arose in postwar Japan, inspired by the seven famous weapons of Benkei.[4] At that time, companies that had set about training their workforces in statistical quality control found that the complexity of the subject intimidated the vast majority of their workers and scaled back training to focus primarily on simpler methods which suffice for most quality-related issues anyway.[2]:18

The Seven Basic Tools stand in contrast with more advanced statistical methods such as survey sampling, acceptance sampling, statistical hypothesis testing, design of experiments, multivariate analysis, and various methods developed in the field of operations research.[2]:199

Ishikawa diagram


One of the Seven Basic Tools of Quality
First described by: Kaoru Ishikawa
Purpose: To break down (in successive layers of detail) root causes that potentially contribute to a particular effect

Ishikawa diagrams (also called fishbone diagrams, herringbone diagrams, cause-and-effect diagrams, or Fishikawa) are causal diagrams that show the causes of a specific event, created by Kaoru Ishikawa (1990).[1] Common uses of the Ishikawa diagram are product design and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify these sources of variation. The categories typically include the following (a small illustrative sketch of this grouping follows the list):

● People: Anyone involved with the process
● Methods: How the process is performed and the specific requirements for doing it, such as policies, procedures, rules, regulations and laws
● Machines: Any equipment, computers, tools, etc. required to accomplish the job
● Materials: Raw materials, parts, pens, paper, etc. used to produce the final product
● Measurements: Data generated from the process that are used to evaluate its quality
● Environment: The conditions, such as location, time, temperature, and culture, in which the process operates
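A fishbone's category-cause-sub-cause structure maps naturally onto nested data. The sketch below reuses the "liquid spills on customer floor" effect from the FMEA worksheet earlier in this document purely to show the shape of the grouping; the specific causes listed are invented for illustration.

```python
# Sketch of a fishbone (cause-and-effect) structure as nested data.
# The causes below are invented purely to show the grouping.

fishbone = {
    "effect": "Liquid spills on customer floor",
    "categories": {
        "People": {"Operator missed alarm": ["No audible alert", "Training gap"]},
        "Machines": {"High level sensor never trips": ["Sensor failed", "Sensor disconnected"]},
        "Methods": {"No fill timeout defined": []},
        "Materials": {},
        "Measurements": {},
        "Environment": {},
    },
}

# Walk the diagram: each major cause branches into its sub-causes (the smaller bones).
for category, causes in fishbone["categories"].items():
    for cause, sub_causes in causes.items():
        print(f"{category} -> {cause} -> {sub_causes}")
```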


Overview

Figure: Ishikawa diagram, in fishbone shape, showing factors of Equipment, Process, People, Materials, Environment and Management, all affecting the overall problem. Smaller arrows connect the sub-causes to major causes.

Ishikawa diagrams were proposed by Kaoru Ishikawa[2] in the 1960s; he pioneered quality management processes in the Kawasaki shipyards, and in the process became one of the founding fathers of modern management. The diagram was first used in the 1940s, and is considered one of the seven basic tools of quality control.[3] It is known as a fishbone diagram because of its shape, similar to the side view of a fish skeleton.

Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was "Jinba Ittai" (Horse and Rider as One; Japanese: 人馬一体). The main causes included such aspects as "touch" and "braking", with the lesser causes including highly granular factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door". Every factor identified in the diagram was included in the final design.

Causes

Causes in the diagram are often categorized, such as into the 8 Ms described below. Cause-and-effect diagrams can reveal key relationships among various variables, and the possible causes provide additional insight into process behavior. Causes can be derived from brainstorming sessions. These groups can then be labeled as categories of the fishbone. They will typically be one of the traditional categories mentioned above, but may be something unique to the application in a specific case. Causes can be traced back to root causes with the 5 Whys technique. Typical categories are:

The 6 Ms (used in manufacturing)

● Machine (technology)
● Method (process)
● Material (includes raw material, consumables and information)
● Man power (physical work) / mind power (brain work): kaizens, suggestions
● Measurement (inspection)
● Milieu / Mother Nature (environment)

The original 6 Ms used by the Toyota Production System have been expanded by some to include the following two, and are then referred to as the 8 Ms. However, this is not globally recognized. It has been suggested to return to the roots of the tools and to keep the teaching simple while recognizing the original intent; most programs do not address the 8 Ms.

● Management/Money Power
● Maintenance


The 8 Ps (used in service industry)

● Product = Service
● Price
● Place
● Promotion/Entertainment
● People (key person)
● Process
● Physical Evidence
● Productivity & Quality

The 5 Ss (used in service industry)

● Surroundings
● Suppliers
● Systems
● Skills
● Safety

Questions to be asked while building a fishbone diagram

Man/Operator – Was the document properly interpreted? – Was the information properly circulated to all the functions? – Did the recipient understand the information? – Was the proper training to perform the task administered to the person? – Was too much judgment required to perform the task? – Were guidelines for judgment available? – Did the environment influence the actions of the individual? – Are there distractions in the workplace? – Is fatigue a mitigating factor? – Is the person's work efficiency acceptable? – Is the person responsible/accountable? – Is the person qualified? – Is the person experienced? – Is the person medically fit and healthy? – How much experience does the individual have in performing this task? – Can the person carry out the operation without error?

Machines – Was the correct tool/tooling used? – Does it meet production requirements? – Does it meet process capabilities? – Are files saved with the correct extension to the correct location? – Is the equipment affected by the environment? – Is the equipment being properly maintained (i.e., daily/weekly/monthly preventive maintenance schedule)? – Does the software or hardware need to be updated? – Does the equipment or software have the features to support our needs/usage? – Was the machine properly maintained? – Was the machine properly programmed? – Is the tooling/fixturing adequate for the job? – Does the machine have an adequate guard? – Was the equipment used within its capabilities and limitations? – Are all controls, including the emergency stop button, clearly labeled and/or color-coded or size-differentiated? – Is the equipment the right application for the given job?

Measurement – Does the gauge have a valid calibration date? – Was the proper gauge used to measure the part, process, chemical, compound, etc.? – Was a gauge capability study ever performed? – Do measurements vary significantly from operator to operator? – Do operators have a tough time using the prescribed gauge? – Is the gauge fixturing adequate? – Does the gauge have proper measurement resolution? – Did the environment influence the measurements taken?

Material (includes raw material, consumables and information) – Is all needed information available and accurate? – Can information be verified or cross-checked? – Has any information changed recently / do we have a way of keeping the information up to date? – What happens if we don't have all of the information we need? – Is a Material Safety Data Sheet (MSDS) readily available? – Was the material properly tested? – Was the material substituted? – Is the supplier's process defined and controlled? – Was the raw material defective? – Was the raw material the wrong type for the job? – Were quality requirements adequate for the part's function? – Was the material contaminated? – Was the material handled properly (stored, dispensed, used and disposed)?

Method – Was the canister, barrel, etc. labeled properly? – Were the workers trained properly in the procedure? – Was the testing performed statistically significant? – Was data tested for true root cause? – How many "if necessary" and "approximately" phrases are found in this process? – Was this a process generated by an Integrated Product Development (IPD) team? – Did the IPD team employ Design for Environment (DFE) principles? – Has a capability study ever been performed for this process? – Is the process under statistical process control (SPC)? – Are the work instructions clearly written? – Are mistake-proofing devices/techniques employed? – Are the work instructions complete? – Is the work standard upgraded and to the current revision? – Is the tooling adequately designed and controlled? – Is handling/packaging adequately specified? – Was the process changed? – Was the design changed? – Are the lighting and ventilation adequate? – Was a process failure modes and effects analysis (FMEA) ever performed? – Was adequate sampling done? – Are features of the process critical to safety clearly spelled out to the operator?


Environment – Is the process affected by temperature changes over the course of a day? – Is the process affected by humidity, vibration, noise, lighting, etc.? – Does the process run in a controlled environment? – Are associates distracted by noise, uncomfortable temperatures, fluorescent lighting, etc.?

Management – Is management involvement seen? – Inattention to task – Task hazards not guarded properly – Other (horseplay, inattention...) – Stress demands – Lack of process – Training or education lacking – Poor employee involvement – Poor recognition of hazard – Previously identified hazards were not eliminated

Criticism

In a discussion of the nature of a cause it is customary to distinguish between necessary and sufficient conditions for the occurrence of an event. A necessary condition for the occurrence of a specified event is a circumstance in whose absence the event cannot occur. A sufficient condition for the occurrence of an event is a circumstance in whose presence the event must occur.[4] A sufficient condition naturally contains one or several necessary ones. Ishikawa diagrams are meant to use the necessary conditions and to split the "sufficient" ones into their "necessary" parts. Some critics, missing this simple logic, have asked which conditions (necessary or sufficient) the diagram addresses in a given case.[5]

Check sheet


One of the Seven Basic Tools of Quality
Purpose: To provide a structured way to collect quality-related data as a rough means for assessing a process or as an input to other analyses

The check sheet is a simple document that is used for collecting data in real-time and at the location where the data is generated. The document is typically a blank form that is designed for the quick, easy, and efficient recording of the


desired information, which can be either quantitative or qualitative. When the information is quantitative, the check sheet is sometimes called a tally sheet.[1]

A defining characteristic of a check sheet is that data is recorded by making marks ("checks") on it. A typical check sheet is divided into regions, and marks made in different regions have different significance. Data is read by observing the location and number of marks on the sheet. There are five basic types of check sheets:

● Classification: A trait such as a defect or failure mode must be classified into a category.
● Location: The physical location of a trait is indicated on a picture of a part or item being evaluated.
● Frequency: The presence or absence of a trait or combination of traits is indicated. The number of occurrences of a trait on a part can also be indicated.
● Measurement scale: A measurement scale is divided into intervals, and measurements are indicated by checking an appropriate interval.
● Check list: The items to be performed for a task are listed so that, as each is accomplished, it can be indicated as having been completed.

Figure: An example of a simple quality control check sheet.

The check sheet is one of the seven basic tools of quality control.
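A frequency-type check sheet is essentially a tally of marks per category, which maps naturally onto a counter. The defect names below are invented for illustration.

```python
# Sketch of a frequency-type check sheet: each mark ("check") recorded at the
# point of work increments a tally for a defect category.  Defect names are
# invented for illustration.
from collections import Counter

tally = Counter()

# Each loop iteration represents one mark made on the sheet as a defect is observed.
for observed_defect in ["scratch", "dent", "scratch", "missing screw", "scratch"]:
    tally[observed_defect] += 1

# Print the sheet: one row per category, with tally marks and the total count.
for defect, count in tally.most_common():
    print(f"{defect:<15} {'|' * count}  ({count})")
```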

Control chart

One of the Seven Basic Tools of Quality
First described by: Walter A. Shewhart
Purpose: To determine whether a process should undergo a formal examination for quality-related problems

Control charts, also known as Shewhart charts or process-behaviour charts, are tools used in statistical process control to determine whether a manufacturing or business process is in a state of statistical control.


Overview


If analysis of the control chart indicates that the process is currently under control (i.e. is stable, with variation only coming from sources common to the process), then no corrections or changes to process control parameters are needed or desirable. In addition, data from the process can be used to predict the future performance of the process. If the chart indicates that the process being monitored is not in control, analysis of the chart can help determine the sources of variation, which can then be eliminated to bring the process back into control. A control chart is a specific kind of run chart that allows significant change to be differentiated from the natural variability of the process.

The control chart can be seen as part of an objective and disciplined approach that enables correct decisions regarding control of the process, including whether to change process control parameters. Process parameters should never be adjusted for a process that is in control, as this will result in degraded process performance.[1] A process that is stable but operating outside of desired limits (e.g. scrap rates may be in statistical control but above desired limits) needs to be improved through a deliberate effort to understand the causes of current performance and fundamentally improve the process.[2]

The control chart is one of the seven basic tools of quality control.[3]

History

The control chart was invented by Walter A. Shewhart while working for Bell Labs in the 1920s. The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a business need to reduce the frequency of failures and repairs. By 1920 the engineers had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of common and special causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Dr. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control."[4] Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.

Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control through carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes seldom produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.[5]

In 1924 or 1925, Shewhart's innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and then became the mathematical advisor to the United States Census Bureau. Over the next half a century, Deming became the foremost champion and proponent of Shewhart's work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander of the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart's thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s.

Chart details
A control chart consists of the following elements (a short computational sketch follows these lists):

● Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality characteristic in samples taken from the process at different times [the data]
● The mean of this statistic using all the samples is calculated (e.g., the mean of the means, mean of the ranges, mean of the proportions)
● A center line is drawn at the value of the mean of the statistic
● The standard error (e.g., standard deviation/sqrt(n) for the mean) of the statistic is also calculated using all the samples
● Upper and lower control limits (sometimes called "natural process limits") that indicate the threshold at which the process output is considered statistically 'unlikely' are drawn, typically at 3 standard errors from the center line

The chart may have other optional features, including:

● Upper and lower warning limits, drawn as separate lines, typically two standard errors above and below the center line
● Division into zones, with the addition of rules governing frequencies of observations in each zone
● Annotation with events of interest, as determined by the Quality Engineer in charge of the process's quality
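As an illustration of these elements, here is a minimal sketch (not part of the original text; the function name and data are my own) that computes the center line and 3-standard-error limits for subgroup means, estimating common-cause variation from within-subgroup spread only:

```python
import numpy as np

def xbar_chart_limits(subgroups):
    """Center line and 3-standard-error limits for an X-bar chart.

    subgroups: 2-D array, one row per subgroup of equal size n.
    Sigma is estimated from within-subgroup (pooled) variation rather
    than from the overall spread, so that only common-cause variation
    sets the limits.
    """
    subgroups = np.asarray(subgroups, dtype=float)
    n = subgroups.shape[1]                     # subgroup size
    means = subgroups.mean(axis=1)             # the plotted statistic
    center = means.mean()                      # grand mean (center line)
    sigma_within = np.sqrt(subgroups.var(axis=1, ddof=1).mean())
    se = sigma_within / np.sqrt(n)             # standard error of a subgroup mean
    return center - 3 * se, center, center + 3 * se

# Example: 20 subgroups of 5 observations from a stable process
rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=0.5, size=(20, 5))
lcl, cl, ucl = xbar_chart_limits(data)
print(f"LCL={lcl:.3f}  CL={cl:.3f}  UCL={ucl:.3f}")
```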

Chart usage
If the process is in control (and the process statistic is normal), 99.7300% of all the points will fall between the control limits. Any observations outside the limits, or systematic patterns within, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special-cause variation. Since increased variation means increased quality costs, a control chart "signaling" the presence of a special cause requires immediate investigation.
This makes the control limits very important decision aids. The control limits tell you about process behavior and have no intrinsic relationship to any specification targets or engineering tolerance. In practice, the process mean (and hence the center line) may not coincide with the specified value (or target) of the quality characteristic because the process design simply cannot deliver the process characteristic at the desired level.
Control charts omit specification limits or targets because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural center is not the same as the target perform to target specification increases process variability and increases costs significantly, and is the cause of much inefficiency in operations. Process capability studies do examine the relationship between the natural process limits (the control limits) and specifications, however.
The purpose of control charts is to allow simple detection of events that are indicative of actual process change. This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When change is detected and considered good, its cause should be identified and possibly become the new way of working; where the change is bad, its cause should be identified and eliminated.
The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it is clear that the process is truly in control. Note that with three-sigma limits, common-cause variations result in signals less than once out of every twenty-two points for skewed processes and about once out of every three hundred seventy (1/370.4) points for normally distributed processes.[6] The two-sigma warning levels will be reached about once for every twenty-two (1/21.98) plotted points in normally distributed data. (For example, the means of sufficiently large samples drawn from practically any underlying distribution whose variance exists are normally distributed, according to the Central Limit Theorem.)
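These false-alarm rates follow directly from the normal distribution; the following quick check is my own illustration (standard library only), not part of the original text:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Probability that a point from an in-control, normally distributed
# process falls outside k-sigma limits
for k in (2, 3):
    p_out = 2 * (1 - normal_cdf(k))
    print(f"{k}-sigma: P(outside) = {p_out:.5f}  ->  about 1 in {1 / p_out:.1f} points")

# 3-sigma: P ~ 0.00270, i.e. roughly 1 point in 370.4 (99.730% within limits)
# 2-sigma: P ~ 0.04550, i.e. roughly 1 point in 22 reaches a warning limit
```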

Choice of limits
Shewhart set 3-sigma (3-standard error) limits on the following basis.

● The coarse result of Chebyshev's inequality that, for any probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 1/k².


● The finer result of the Vysochanskii–Petunin inequality, that for any unimodal probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 4/(9k²).

● The empirical investigation of sundry probability distributions reveals that at least 99% of observations occurred within three standard deviations of the mean. (A short numerical comparison of these three bounds follows this list.)
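For k = 3 the three arguments give very different guarantees; the quick comparison below is my own calculation, not part of the original text:

```python
# Upper bounds on the probability of an observation more than k = 3
# standard deviations from the mean, under each of the three arguments
k = 3
chebyshev = 1 / k**2                     # any distribution:          ~11.1%
vysochanskii_petunin = 4 / (9 * k**2)    # any unimodal distribution:  ~4.9%
normal_3sigma = 0.0027                   # empirical / normal case:   ~0.27%
print(chebyshev, vysochanskii_petunin, normal_3sigma)
```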

Shewhart summarized the conclusions by saying:
... the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating.[citation needed]
Though he initially experimented with limits based on probability distributions, Shewhart ultimately wrote:
Some of the earliest attempts to characterize a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterized such a state. When the normal law was found to be inadequate, then generalized functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted.[citation needed]
The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman–Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques. Deming's intention was to seek insights into the cause system of a process ... under a wide range of unknowable circumstances, future and past ...[citation needed] He claimed that, under such conditions, 3-sigma limits provided ... a rational and economic guide to minimum economic loss ... from the two errors:[citation needed]

1. Ascribe a variation or a mistake to a special cause (assignable cause) when in fact the cause belongs to the system (common cause). (Also known as a Type I error )

2. Ascribe a variation or a mistake to the system (common causes) when in fact the cause was a special cause (assignable cause). (Also known as a Type II error )

Calculation of standard deviation
As for the calculation of control limits, the standard deviation (error) required is that of the common-cause variation in the process. Hence, the usual estimator, in terms of sample variance, is not used as this estimates the total squared-error loss from both common and special causes of variation. An alternative method is to use the relationship between the range of a sample and its standard deviation derived by Leonard H. C. Tippett, an estimator which tends to be less influenced by the extreme observations which typify special causes.
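A minimal sketch of this range-based estimate follows (my own illustration; the d2 values are the commonly tabulated control-chart constants and the function name is hypothetical):

```python
import numpy as np

# Commonly tabulated control-chart constants d2 (expected relative range
# for samples of size n drawn from a normal distribution)
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def sigma_from_ranges(subgroups):
    """Estimate common-cause sigma as R-bar / d2 from equal-size subgroups."""
    subgroups = np.asarray(subgroups, dtype=float)
    n = subgroups.shape[1]
    r_bar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()
    return r_bar / D2[n]

rng = np.random.default_rng(1)
print(sigma_from_ranges(rng.normal(0.0, 2.0, size=(25, 5))))  # should be near 2.0
```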

Rules for detecting signals
The most common sets are:

● The Western Electric rules
● The Wheeler rules (equivalent to the Western Electric zone tests[7])
● The Nelson rules

There has been particular controversy as to how long a run of observations, all on the same side of the centre line, should count as a signal, with 6, 7, 8 and 9 all being advocated by various writers. The most important principle for choosing a set of rules is that the choice be made before the data is inspected. Choosing rules once the data have been seen tends to increase the Type I error rate owing to testing effects suggested by the data.
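As an illustration of such a run rule, the sketch below (my own, a simplified version of one style of test rather than a full implementation of any published rule set) flags a run of eight consecutive points on the same side of the center line:

```python
def run_rule_signal(points, center, run_length=8):
    """Return indices where `run_length` consecutive points fall on the
    same side of the center line (a point exactly on the line resets the run)."""
    signals, run, side = [], 0, 0
    for i, x in enumerate(points):
        s = (x > center) - (x < center)   # +1 above, -1 below, 0 on the line
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            signals.append(i)
    return signals

print(run_rule_signal([5.1, 5.2, 5.3, 5.1, 5.2, 5.4, 5.2, 5.3, 5.1], center=5.0))
```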

Alternative bases
In 1935, the British Standards Institution, under the influence of Egon Pearson and against Shewhart's spirit, adopted control charts, replacing 3-sigma limits with limits based on percentiles of the normal distribution. This move continues to be represented by John Oakland and others but has been widely deprecated by writers in the Shewhart–Deming tradition.

Performance of control charts
When a point falls outside of the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, it is appropriate to determine if the results with the special cause are better than or worse than results from common causes alone. If worse, then that cause should be eliminated if possible. If better, it may be appropriate to intentionally retain the special cause within the system producing the results.[citation needed]


It is known that even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. So, even an in-control process plotted on a properly constructed control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.[citation needed]
Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.[citation needed]
It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart, the CUSUM chart and the real-time contrasts chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.[citation needed]
Most control charts work best for numeric data with Gaussian assumptions. The real-time contrasts chart has been proposed to handle process data with complex characteristics, e.g. high-dimensional data, mixed numerical and categorical data, missing values, non-Gaussian distributions, and non-linear relationships.
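The in-control and out-of-control ARLs mentioned above can be approximated for a shift of delta standard errors in the mean; the sketch below is my own (standard library only) and assumes independent, normally distributed plotted points:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def shewhart_arl(delta, k=3.0):
    """Average run length of a k-sigma Shewhart chart after a shift of
    `delta` standard errors in the mean (delta = 0 gives the in-control ARL)."""
    p_signal = 1.0 - (phi(k - delta) - phi(-k - delta))
    return 1.0 / p_signal

for delta in (0.0, 1.0, 2.0, 3.0):
    print(f"shift = {delta} sigma  ->  ARL ~ {shewhart_arl(delta):.1f}")

# delta = 0 gives ~370 points before a false alarm; a 1-sigma shift still
# takes ~44 points on average, illustrating the weakness for small shifts
```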

Criticisms
Several authors have criticised the control chart on the grounds that it violates the likelihood principle.[citation needed] However, the principle is itself controversial, and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak.[citation needed]
Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because that average usually follows a geometric distribution, which has high variability and presents difficulties of interpretation.[citation needed]
Some authors have criticized that most control charts focus on numeric data. Nowadays, process data can be much more complex, e.g. non-Gaussian, mixed numerical and categorical, or missing-valued.[8]

Types of charts

Chart | Process observation | Process observations relationships | Process observations type | Size of shift to detect
x̄ and R chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
x̄ and s chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
Shewhart individuals control chart (ImR chart or XmR chart) | Quality characteristic measurement for one observation | Independent | Variables† | Large (≥ 1.5σ)
Three-way chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
p-chart | Fraction nonconforming within one subgroup | Independent | Attributes† | Large (≥ 1.5σ)
np-chart | Number nonconforming within one subgroup | Independent | Attributes† | Large (≥ 1.5σ)
c-chart | Number of nonconformances within one subgroup | Independent | Attributes† | Large (≥ 1.5σ)
u-chart | Nonconformances per unit within one subgroup | Independent | Attributes† | Large (≥ 1.5σ)
EWMA chart | Exponentially weighted moving average of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)
CUSUM chart | Cumulative sum of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)
Time series model | Quality characteristic measurement within one subgroup | Autocorrelated | Attributes or variables | N/A
Regression control chart | Quality characteristic measurement within one subgroup | Dependent of process control variables | Variables | Large (≥ 1.5σ)
Real-time contrasts chart | Sliding window of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)

†Some practitioners also recommend the use of Individuals charts for attribute data, particularly when the assumptions of either binomially-distributed data (p- and np-charts) or Poisson-distributed data (u- and c-charts) are violated.[9] Two primary justifications are given for this practice. First, normality is not necessary for statistical control, so the Individuals chart may be used with non-normal data.[10] Second, attribute charts derive the measure of dispersion directly from the mean proportion (by assuming a probability distribution), while Individuals charts derive the measure of dispersion from the data, independent of the mean, making Individuals charts more robust than attributes charts to violations of the assumptions about the distribution of the underlying population.[11] It is sometimes noted that the substitution of the Individuals chart works best for large counts, when the binomial and Poisson distributions approximate a normal distribution, i.e. when the number of trials n > 1000 for p- and np-charts or λ > 500 for u- and c-charts.
Critics of this approach argue that control charts should not be used when their underlying assumptions are violated, such as when process data is neither normally distributed nor binomially (or Poisson) distributed. Such processes are not in control and should be improved before the application of control charts. Additionally, application of the charts in the presence of such deviations increases the type I and type II error rates of the control charts, and may make the chart of little practical use.[citation needed]

Histogram

One of the Seven Basic Tools of Quality
First described by: Karl Pearson
Purpose: To roughly assess the probability distribution of a given variable by depicting the frequencies of observations occurring in certain ranges of values

In statistics, a histogram is a graphical representation showing a visual impression of the distribution of data. It is an estimate of the probability distribution of a continuous variable and was first introduced by Karl Pearson.[1] A histogram consists of tabular frequencies, shown as adjacent rectangles, erected over discrete intervals (bins), with an area equal to the frequency of the observations in the interval. The height of a rectangle is also equal to the frequency density of the interval, i.e., the frequency divided by the width of the interval. The total area of the histogram is equal to the number of data points. A histogram may also be normalized, displaying relative frequencies. It then shows the proportion of cases that fall into each of several categories, with the total area equaling 1. The categories are usually specified as consecutive, non-overlapping intervals of a variable. The categories (intervals) must be adjacent, and often are chosen to be of the same size.[2]
Histograms are used to plot the density of data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot.
An alternative to the histogram is kernel density estimation, which uses a kernel to smooth samples. This will construct a smooth probability density function, which will in general more accurately reflect the underlying variable.
The histogram is one of the seven basic tools of quality control.[3]

Contents
● 1 Etymology
● 2 Examples
  ○ 2.1 Shape or form of a distribution
● 3 Activities and demonstrations
● 4 Mathematical definition
  ○ 4.1 Cumulative histogram
  ○ 4.2 Number of bins and width
● 5 See also
● 6 References
● 7 Further reading
● 8 External links

Etymology

An example histogram of the heights of 31 Black Cherry trees.
The etymology of the word histogram is uncertain. Sometimes it is said to be derived from the Greek histos 'anything set upright' (as the masts of a ship, the bar of a loom, or the vertical bars of a histogram); and gramma 'drawing, record, writing'. It is also said that Karl Pearson, who introduced the term in 1895, derived the name from "historical diagram".[4]

Examples
The U.S. Census Bureau found that there were 124 million people who work outside of their homes.[5] Using their data on the time occupied by travel to work, Table 2 below shows the absolute number of people who responded with travel times "at least 15 but less than 20 minutes" is higher than the numbers for the categories above and below it. This is likely due to people rounding their reported journey time.[citation needed] The problem of reporting values as somewhat arbitrarily rounded numbers is a common phenomenon when collecting data from people.[citation needed]


Histogram of travel time, US 2000 census. Area under the curve equals the total number of cases. This diagram uses Q/width from the table.

Interval Width Quantity Quantity/width

0 5 4180 836

5 5 13687 2737

10 5 18618 3723

15 5 19634 3926

20 5 17981 3596

25 5 7190 1438

30 5 16369 3273

35 5 3212 642

40 5 4122 824

45 15 9200 613

60 30 6461 215

90 60 3435 57

This histogram shows the number of cases per unit interval (Q/width) as the height of each bar, so that the area of each bar equals the number of people in the survey who fall into that category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers, with Q in thousands.


Histogram of travel time, US 2000 census. Area under the curve equals 1. This diagram uses Q/total/width from the table.

Interval Width Quantity (Q) Q/total/width

0 5 4180 0.0067

5 5 13687 0.0221

10 5 18618 0.0300

15 5 19634 0.0316

20 5 17981 0.0290

25 5 7190 0.0116

30 5 16369 0.0264

35 5 3212 0.0052

40 5 4122 0.0066

45 15 9200 0.0049

60 30 6461 0.0017

90 60 3435 0.0005

This histogram differs from the first only in the vertical scale. The height of each bar is the decimal percentage of the total that each category represents, and the total area of all the bars is equal to 1, the decimal equivalent of 100%. The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram.
In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also continuous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5–20.5 and 20.5–33.5, but not two connecting intervals of 10.5–20.5 and 22.5–32.5. Empty intervals are represented as empty and not skipped.)[6]
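The two tables above can be reproduced directly from the raw counts; the short sketch below (my own, using the census figures quoted in the tables) shows how the Q/width and Q/total/width columns arise and that the unit-area version integrates to 1:

```python
# Travel-time histogram from the 2000 US census tables (Q in thousands)
intervals = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 60, 90]
widths    = [5, 5, 5, 5, 5, 5, 5, 5, 5, 15, 30, 60]
quantity  = [4180, 13687, 18618, 19634, 17981, 7190,
             16369, 3212, 4122, 9200, 6461, 3435]

total = sum(quantity)                                       # about 124 million (in thousands)
q_per_width = [q / w for q, w in zip(quantity, widths)]     # first table: cases per minute
density = [q / total / w for q, w in zip(quantity, widths)] # second table: unit-area heights

# Each bar's area is width * height; the unit-area bars sum to exactly 1
print(total, sum(w * d for w, d in zip(widths, density)))   # -> 124089, 1.0
```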

Shape or form of a distribution

The histogram provides important information about the shape of a distribution. According to the values presented, the histogram is either highly or moderately skewed to the left or right. A symmetrical shape is also possible, although a histogram is never perfectly symmetrical. If the histogram is skewed to the left, or negatively skewed, the tail extends further to the left. An example of a distribution skewed to the left might be the relative frequency of exam scores: most of the scores are above 70 percent and only a few low scores occur. An example of a distribution skewed to the right, or positively skewed, is a histogram showing the relative frequency of housing values: a relatively small number of expensive homes create the skewness to the right, and the tail extends further to the right. In a symmetrical distribution the left and right tails mirror each other; the histogram of IQ scores is an example. Histograms can be unimodal, bi-modal or multi-modal, depending on the dataset.[7]

Activities and demonstrations
The SOCR resource pages contain a number of hands-on interactive activities demonstrating the concept of a histogram, histogram construction and manipulation using Java applets and charts.

Mathematical definition

An ordinary and a cumulative histogram of the same data. The data shown is a random sample of 10,000 points from a normal distribution with a mean of 0 and a standard deviation of 1.

In a more general mathematical sense, a histogram is a function m_i that counts the number of observations that fall into each of the disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let n be the total number of observations and k be the total number of bins, the histogram m_i meets the following condition: the bin counts sum to the total number of observations, n = m_1 + m_2 + ... + m_k.

Cumulative histogram
A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram M_i of a histogram m_j is defined as the running total M_i = m_1 + m_2 + ... + m_i.
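A minimal sketch (my own) of the two definitions above, computing the bin counts m_i and their running totals M_i:

```python
import numpy as np

data = np.random.default_rng(2).normal(0.0, 1.0, size=10_000)
counts, edges = np.histogram(data, bins=20)   # m_i: observations per bin
cumulative = np.cumsum(counts)                # M_i = m_1 + ... + m_i

assert counts.sum() == len(data)              # the m_i sum to n
assert cumulative[-1] == len(data)            # the last M_i equals n
```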

Number of bins and width
There is no "best" number of bins, and different bin sizes can reveal different features of the data. Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. Depending on the actual data distribution and the goals of the analysis, different bin widths may be appropriate, so experimentation is usually needed to determine an appropriate width. There are, however, various useful guidelines and rules of thumb.[8]
The number of bins k can be assigned directly or can be calculated from a suggested bin width h as:

k = ⌈(max x − min x) / h⌉
The braces indicate the ceiling function.

Sturges' formula[9]
k = ⌈log2(n)⌉ + 1,
which implicitly bases the bin sizes on the range of the data, and can perform poorly if n < 30.

Scott's choice[10]
h = 3.5σ / n^(1/3),
where σ is the sample standard deviation.

Square-root choice
k = ⌈√n⌉,
which takes the square root of the number of data points in the sample (used by Excel histograms and many others).

Freedman–Diaconis' choice[11]
h = 2 · IQR(x) / n^(1/3),
which is based on the interquartile range, denoted by IQR.

Choice based on minimization of an estimated L2 risk function[12]
h* = argmin_h (2·m̄ − v) / h²,
where m̄ and v are the mean and biased variance of a histogram with bin width h: m̄ = (1/k)·Σ m_i and v = (1/k)·Σ (m_i − m̄)².
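Several of these rules are available directly through NumPy's histogram helpers; the brief sketch below (my own) compares the bin counts they suggest for the same sample:

```python
import numpy as np

x = np.random.default_rng(3).normal(0.0, 1.0, size=500)
for rule in ("sturges", "scott", "sqrt", "fd"):
    edges = np.histogram_bin_edges(x, bins=rule)   # bin edges under each rule
    print(f"{rule:8s}: {len(edges) - 1} bins, width ~ {edges[1] - edges[0]:.3f}")
```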

Pareto chart

Simple example of a Pareto chart using hypothetical data showing the relative frequency of reasons for arriving late at work

The left vertical axis is the frequency of occurrence, but it can alternatively represent cost or another important unit of measure. The right vertical axis is the cumulative percentage of the total number of occurrences, total cost, or total of the particular unit of measure. Because the reasons are in decreasing order, the cumulative function is a concave function. To take the example above, in order to reduce late arrivals by 80%, it is sufficient to solve the first three issues.
The purpose of the Pareto chart is to highlight the most important among a (typically large) set of factors. In quality control, it often represents the most common sources of defects, the highest occurring type of defect, or the most frequent reasons for customer complaints, and so on. Wilkinson (2006) devised an algorithm for producing statistically-based acceptance limits (similar to confidence intervals) for each bar in the Pareto chart.
These charts can be generated by simple spreadsheet programs, such as OpenOffice.org Calc and Microsoft Excel, by specialized statistical software tools, and by online quality chart generators.
The Pareto chart is one of the seven basic tools of quality control.[1]
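A chart of this kind can be assembled in a few lines; the sketch below is my own (the reason counts are invented, standing in for the late-arrival example): it sorts the categories, plots the bars, and overlays the cumulative percentage on a second axis.

```python
import matplotlib.pyplot as plt

# Hypothetical counts of reasons for arriving late at work
reasons = {"Traffic": 25, "Child care": 12, "Public transport": 9,
           "Weather": 5, "Overslept": 3, "Emergency": 1}

items = sorted(reasons.items(), key=lambda kv: kv[1], reverse=True)
labels = [k for k, _ in items]
counts = [v for _, v in items]
total = sum(counts)
cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

fig, ax = plt.subplots()
ax.bar(labels, counts)                        # left axis: frequency of occurrence
ax.set_ylabel("Frequency")
ax2 = ax.twinx()                              # right axis: cumulative percentage
ax2.plot(labels, cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 105)
plt.show()
```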

Scatter plot

A 3D scatter plot allows for the visualization of multivariate data of up to four dimensions. The Scatter plot takes multiple scalar variables and uses them for different axes in phase space. The different variables are combined to form coordinates in the phase space and they are displayed using glyphs and colored using another scalar variable.[1]

A scatter plot or scattergraph is a type of mathematical diagram using Cartesian coordinates to display values for two variables for a set of data. The data is displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.[2] This kind of plot is also called a scatter chart, scattergram, scatter diagram or scatter graph.

Contents
● 1 Overview
● 2 Example
● 3 See also
● 4 References
● 5 External links

Overview
A scatter plot is used when a variable exists that is under the control of the experimenter. If a parameter exists that is systematically incremented and/or decremented by the experimenter, it is called the control parameter or independent variable and is customarily plotted along the horizontal axis. The measured or dependent variable is customarily plotted along the vertical axis. If no dependent variable exists, either type of variable can be plotted on either axis and a scatter plot will illustrate only the degree of correlation (not causation) between two variables.
A scatter plot can suggest various kinds of correlations between variables with a certain confidence interval. For example, with weight and height data, weight would be on the x-axis and height on the y-axis. Correlations may be positive (rising), negative (falling), or null (uncorrelated). If the pattern of dots slopes from lower left to upper right, it suggests a positive correlation between the variables being studied. If the pattern of dots slopes from upper left to lower right, it suggests a negative correlation. A line of best fit (alternatively called a 'trendline') can be drawn in order to study the correlation between the variables. An equation for the correlation between the variables can be determined by established best-fit procedures. For a linear correlation, the best-fit procedure is known as linear regression and is guaranteed to generate a correct solution in a finite time. No universal best-fit procedure is guaranteed to generate a correct solution for arbitrary relationships.
A scatter plot is also very useful when we wish to see how two comparable data sets agree with each other. In this case, an identity line, i.e., a y = x or 1:1 line, is often drawn as a reference. The more the two data sets agree, the more the scatters tend to concentrate in the vicinity of the identity line; if the two data sets are numerically identical, the scatters fall on the identity line exactly.
One of the most powerful aspects of a scatter plot, however, is its ability to show nonlinear relationships between variables. Furthermore, if the data is represented by a mixture model of simple relationships, these relationships will be visually evident as superimposed patterns.
The scatter diagram is one of the seven basic tools of quality control.[3]

Example
For example, to display the relationship between a person's lung capacity and how long that person could hold his or her breath, a researcher would choose a group of people to study, then measure each one's lung capacity (first variable) and how long that person could hold his or her breath (second variable). The researcher would then plot the data in a scatter plot, assigning "lung capacity" to the horizontal axis and "time holding breath" to the vertical axis.
A person with a lung capacity of 400 ml who held his or her breath for 21.7 seconds would be represented by a single dot on the scatter plot at the point (400, 21.7) in Cartesian coordinates. The scatter plot of all the people in the study would enable the researcher to obtain a visual comparison of the two variables in the data set, and will help to determine what kind of relationship there might be between the two variables.
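A sketch of such a study follows (my own illustration with invented measurements); it draws the scatter plot and adds a least-squares trendline:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented measurements: lung capacity (ml) vs. breath-holding time (s)
capacity = np.array([320, 400, 450, 510, 560, 610, 680, 720])
hold_time = np.array([14.2, 21.7, 23.1, 27.5, 29.0, 33.4, 36.8, 39.9])

slope, intercept = np.polyfit(capacity, hold_time, 1)    # line of best fit

plt.scatter(capacity, hold_time)                          # the raw data
plt.plot(capacity, slope * capacity + intercept)          # trendline
plt.xlabel("Lung capacity (ml)")
plt.ylabel("Time holding breath (s)")
plt.show()
```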

See also
● Plot (graphics)

References
1. Visualizations that have been created with VisIt. at wci.llnl.gov. Last updated: November 8, 2007.
2. Utts, Jessica M. Seeing Through Statistics, 3rd Edition, Thomson Brooks/Cole, 2005, pp. 166–167. ISBN 0-534-39402-7.
3. Nancy R. Tague (2004). "Seven Basic Quality Tools". The Quality Toolbox. Milwaukee, Wisconsin: American Society for Quality. p. 15. Retrieved 2010-02-05.

External links
● What is a scatterplot?
● Correlation scatter-plot matrix for ordered-categorical data - Explanation and R code
● Tool for visualizing scatter plots
● Density scatterplot for large datasets (hundreds of millions of points)

Malcolm Baldrige National Quality Award

The Malcolm Baldrige National Quality Award recognizes U.S. organizations in the business, health care, education, and nonprofit sectors for performance excellence. The Baldrige Award is the only formal recognition of the performance excellence of both public and private U.S. organizations given by the President of the United States. It is administered by the Baldrige Performance Excellence Program, which is based at and managed by the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce. Up to 18 awards may be given annually across six eligibility categories—manufacturing, service, small business, education, health care, and nonprofit. As of 2010, 91 organizations had received the award.
The Baldrige National Quality Program and the associated award were established by the Malcolm Baldrige National Quality Improvement Act of 1987 (Public Law 100–107). The program and award were named for Malcolm Baldrige, who served as United States Secretary of Commerce during the Reagan administration, from 1981 until Baldrige's 1987 death in a rodeo accident. In 2010, the program's name was changed to the Baldrige Performance Excellence Program to reflect the evolution of the field of quality from a focus on product, service, and customer quality to a broader, strategic focus on overall organizational quality—called performance excellence.[1]
The award promotes awareness of performance excellence as an increasingly important element in competitiveness. It also promotes the sharing of successful performance strategies and the benefits derived from using these strategies. To receive a Baldrige Award, an organization must have a role-model organizational management system that ensures continuous improvement in delivering products and/or services, demonstrates efficient and effective operations, and provides a way of engaging and responding to customers and other stakeholders. The award is not given for specific products or services.

Contents
● 1 Criteria for Performance Excellence
● 2 Early History of the Baldrige Program
● 3 Program Impacts
● 4 Public-Private Partnership
● 5 Baldrige Award Recipients
● 6 See also
● 7 References

Criteria for Performance Excellence
The Baldrige Criteria for Performance Excellence serve two main purposes: (1) to identify Baldrige Award recipients that will serve as role models for other organizations and (2) to help organizations assess their improvement efforts, diagnose their overall performance management system, and identify their strengths and opportunities for improvement. In addition, the Criteria help strengthen U.S. competitiveness by

● improving organizational performance practices, capabilities, and results
● facilitating communication and sharing of information on best practices among U.S. organizations of all types
● serving as a tool for understanding and managing performance and for guiding planning and opportunities for learning

The Baldrige Criteria for Performance Excellence provide organizations with an integrated approach to performance management that results in

● delivery of ever-improving value to customers and stakeholders, contributing to organizational sustainability
● improved organizational effectiveness and capabilities
● organizational and personal learning

The following three sector-specific versions of the Criteria, which are revised every two years, are available for free from the Baldrige Program:

● Criteria for Performance Excellence
● Education Criteria for Performance Excellence
● Health Care Criteria for Performance Excellence

Early History of the Baldrige Program
● In the early and mid-1980s, many U.S. industry and government leaders saw that a renewed emphasis on quality was necessary for doing business in an ever-expanding and more competitive world market. But many U.S. businesses either did not believe quality mattered for them or did not know where to begin.

● The Malcolm Baldrige National Quality Improvement Act of 1987, signed into law on August 20, 1987, was developed through the actions of the National Productivity Advisory Committee, chaired by Jack Grayson . The nonprofit research organization APQC, founded by Grayson, organized the first White House Conference on Productivity, spearheading the creation of the Malcolm Baldrige National Quality Award in 1987. The Baldrige Award was envisioned as a standard of excellence that would help U.S. organizations achieve world-class quality.

● In the late summer and fall of 1987, Dr. Curt Reimann, the first director of the Malcolm Baldrige National Quality Program, and his staff at the National Institute of Standards and Technology (NIST) developed an award implementation framework, including an evaluation scheme, and advanced proposals for what is now the Baldrige Award.

● In its first three years, the Baldrige Award was jointly administered by APQC and the American Society for Quality, which continues to assist in administering the award program under contract to NIST.

Program Impacts
● According to Building on Baldrige: American Quality for the 21st Century by the private Council on Competitiveness, “More than any other program, the Baldrige Quality Award is responsible for making quality a national priority and disseminating best practices across the United States.”

● The Baldrige Program's net private benefits to the economy as a whole were conservatively estimated at $24.65 billion. When compared to the program's social costs of $119 million, the program’s social benefit-to-cost ratio was 207-to-1.[2]

● In 2007, 2008, 2009, and 2010, Leadership Excellence magazine placed the Baldrige Program in the top 10 best government/military leadership programs in the United States based on seven criteria: vision/mission, involvement/participation, accountability/measurement, content/curriculum, presenters/presentations, take-home value/results for customers, and outreach of the programs and products.

● Since the program’s inception in 1987, more than 2 million copies of the business/nonprofit, education, and health care versions of the Criteria for Performance Excellence have been distributed to individuals and organizations in the United States and abroad. In 2010, more than 2.1 million copies of the Criteria were accessed or downloaded from the Baldrige Web site.

Public-Private Partnership
The Baldrige Award is supported by a distinctive public-private partnership. The following organizations and entities play a key role:

● The Foundation for the Malcolm Baldrige National Quality Award raises funds to permanently endow the award program.

● The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, manages the Baldrige Program.

● The American Society for Quality (ASQ) assists in administering the award program under contract to NIST.
● The Board of Overseers advises the Department of Commerce on the Baldrige Program.
● Members of the Board of Examiners—consisting of leading experts from U.S. businesses and education, health care, and nonprofit organizations—volunteer their time to evaluate award applications and prepare feedback reports for applicant organizations. Board members also share information about the program in their professional, trade, community, and state organizations. The Panel of Judges, part of the Board of Examiners, makes award recommendations to the director of NIST.

● The network of state, regional, and local Baldrige-based award programs known as the Alliance for Performance Excellence provides potential award applicants and examiners, promotes the use of the Criteria, and disseminates information on the award process and concepts.

● Award recipients share information on their successful performance and quality strategies with other U.S. organizations.

Baldrige Award Recipients

Year Award Recipient Sector

2010

MEDRAD, Warrendale, PA manufacturing


Nestlé Purina PetCare Co., St. Louis, MO manufacturing

Freese and Nichols Inc., Fort Worth, TX small business

K&N Management, Austin, TX small business

Studer Group, Gulf Breeze, FL small business

Advocate Good Samaritan Hospital, Downers Grove, IL health care

Montgomery County Public Schools, Rockville, MD education

2009

Honeywell Federal Manufacturing & Technologies, Kansas City, MO manufacturing

MidwayUSA, Columbia, MO small business

AtlantiCare, Egg Harbor Township, NJ health care

Heartland Health, St. Joseph, MO health care

VA Cooperative Studies Program Clinical Research Pharmacy Coordinating Center, Albuquerque, NM

nonprofit

2008

Cargill Corn Milling North America, Wayzata, MN manufacturing

Poudre Valley Health System, Fort Collins, CO health care

Iredell-Statesville Schools, Statesville, NC education

2007

PRO-TEC Coating Co., Leipsic, OH small business

Mercy Health System, Janesville, WI health care

Sharp Healthcare, San Diego, CA health care

City of Coral Springs, Coral Springs, FL nonprofit

U.S. Army Armament Research, Development and Engineering Center (ARDEC), Picatinny Arsenal, NJ

nonprofit

2006

MESA Products, Inc., Tulsa, OK small business


Premier Inc., San Diego, CA service

North Mississippi Medical Center, Tupelo, MS health care

2005

Sunny Fresh Foods, Inc., Monticello, MN manufacturing

DynMcDermott Petroleum Operations, New Orleans, LA service

Park Place Lexus, Plano, TX small business

Richland College, Dallas, TX education

Jenks Public Schools, Jenks, OK education

Bronson Methodist Hospital, Kalamazoo, MI health care

2004

The Bama Companies, Tulsa, OK manufacturing

Texas Nameplate Company, Inc., Dallas, TX small business

Kenneth W. Monfort College of Business, Greeley, CO education

Robert Wood Johnson University Hospital Hamilton, Hamilton, NJ health care

2003

Medrad, Inc., Indianola, PA manufacturing

Boeing Aerospace Support, St. Louis, MO service

Caterpillar Financial Services Corp., Nashville, TN service

Stoner Inc., Quarryville, PA small business

Community Consolidated School District 15, Palatine, IL education

Baptist Hospital, Inc., Pensacola, FL health care

Saint Luke’s Hospital of Kansas City, Kansas City, MO health care

2002

Motorola Inc. Commercial, Government and Industrial Solutions Sector, Schaumburg, IL

manufacturing

Branch-Smith Printing Division, Fort Worth, TX small business


SSM Health Care, St. Louis, MO health care

2001

Clarke American Checks, Incorporated, San Antonio, TX manufacturing

Pal’s Sudden Service, Kingsport, TN small business

Chugach School District, Anchorage, AK education

Pearl River School District, Pearl River, NY education

University of Wisconsin–Stout, Menomonie, WI education

2000

Dana Corp.-Spicer Driveshaft Division, Toledo, OH manufacturing

KARLEE Company, Inc., Garland, TX manufacturing

Operations Management International, Inc., Greenwood Village, CO service

Los Alamos National Bank, Los Alamos, NM small business

1999

STMicroelectronics, Inc.-Region Americas, Carrollton, TX manufacturing

BI Performance Services, Minneapolis, MN service

The Ritz-Carlton Hotel Company, L.L.C., Atlanta, GA service

Sunny Fresh Foods, Monticello, MN small business

1998

Boeing Airlift and Tanker Programs, Long Beach, CA manufacturing

Solar Turbines Inc., San Diego, CA manufacturing

Texas Nameplate Company, Inc., Dallas, TX small business

1997

3M Dental Products Division, St. Paul, MN manufacturing

Solectron Corp., Milpitas, CA manufacturing

Merrill Lynch Credit Corp., Jacksonville, FL service

Xerox Business Services, Rochester, NY service


1996

ADAC Laboratories , Milpitas, CA manufacturing

Dana Commercial Credit Corp., Toledo, OH service

Custom Research Inc., Minneapolis, MN small business

Trident Precision Manufacturing Inc., Webster, NY small business

1995

Armstrong World Industries’ Building Products Operation, Lancaster, PA manufacturing

Corning Telecommunications Products Division, Corning, NY manufacturing

1994

AT&T Consumer Communications Services, Basking Ridge, NJ service

GTE Directories Corp., Dallas/Ft. Worth, TX service

Wainwright Industries Inc., St. Peters, MO small business

1993

Eastman Chemical Co., Kingsport, TN manufacturing

Ames Rubber Corp., Hamburg, NJ small business

1992

AT&T Network Systems Group/Transmission Systems Business Unit, Morristown, NJ

manufacturing

Texas Instruments Inc. Defense Systems & Electronics Group, Dallas, TX manufacturing

AT&T Universal Card Services, Jacksonville, FL service

The Ritz-Carlton Hotel Co., Atlanta, GA service

Granite Rock Co., Watsonville, CA small business

1991

Solectron Corp., Milpitas, CA manufacturing

Zytec Corp., Eden Prairie, MN manufacturing

Marlow Industries, Dallas, TX small business


1990

Cadillac Motor Car Division, Detroit, MI manufacturing

IBM Rochester , Rochester, MN manufacturing

Federal Express Corp., Memphis, TN service

Wallace Co. Inc., Houston, TX small business

1989

Milliken & Co., Spartanburg, SC manufacturing

Xerox Corp. Business Products and Systems, Rochester, NY manufacturing

1988

Motorola Inc., Schaumburg, IL manufacturing

Commercial Nuclear Fuel Division of Westinghouse Electric Corp., Pittsburgh, PA

manufacturing

Globe Metallurgical Inc., Beverly, OH small business

Quality audit


Quality audit is the process of systematic examination of a quality system carried out by an internal or external quality auditor or an audit team. It is an important part of an organization's quality management system and is a key element in the ISO quality system standard, ISO 9001.

Quality audits are typically performed at predefined time intervals and ensure that the institution has clearly defined internal system monitoring procedures linked to effective action. This can help determine if the organization complies with the defined quality system processes and can involve procedural or results-based assessment criteria.


With the upgrade of the ISO 9000 series of standards from the 1994 to 2008 series, the focus of the audits has shifted from purely procedural adherence towards measurement of the actual effectiveness of the Quality Management System (QMS) and the results that have been achieved through the implementation of a QMS.

Audits are an essential management tool to be used for verifying objective evidence of processes, to assess how successfully processes have been implemented, for judging the effectiveness of achieving any defined target levels, to provide evidence concerning reduction and elimination of problem areas. For the benefit of the organisation, quality auditing should not only report non-conformances and corrective actions, but also highlight areas of good practice. In this way other departments may share information and amend their working practices as a result, also contributing to continual improvement.

Quality audits can be an integral part of compliance or regulatory requirements. One example is the US Food and Drug Administration , which requires quality auditing to be performed as part of its Quality System Regulation (QSR) for medical devices (Title 21 of the US Code of Federal Regulations part 820[1]).

Several countries have adopted quality audits in their higher education system (New Zealand, Australia, Sweden, Finland, Norway and USA) [2] Initiated in the UK, the process of quality audit in the education system focused primarily on procedural issues rather than on the results or the efficiency of a quality system implementation.

Audits can also be used for safety purposes. Evans & Parker (2008) describe auditing as one of the most powerful safety monitoring techniques and 'an effective way to avoid complacency and highlight slowly deteriorating conditions', especially when the auditing focuses not just on compliance but effectiveness. [3]

The processes and tasks that a quality audit involves can be managed using a wide variety of software and self-assessment tools. Some of these relate specifically to quality in terms of fitness for purpose and conformance to standards, while others relate to Quality costs or, more accurately, to the Cost of poor quality. In analyzing quality costs, a cost of quality audit can be applied across any organization rather than just to conventional production or assembly processes.[4]

Quality circle

A quality circle is a volunteer group composed of workers (or even students), usually under the leadership of their supervisor (but they can elect a team leader), who are trained to identify, analyze and solve work-related problems and present their solutions to management in order to improve the performance of the organization, and motivate and enrich the work of employees. When matured, true quality circles become self-managing, having gained the confidence of management.
Quality circles are an alternative to the dehumanizing concept of the division of labor, where workers or individuals are treated like robots. They bring back the concept of craftsmanship, which when operated on an individual basis is uneconomic but when used in group form can be devastatingly powerful. Quality circles enable the enrichment of the lives of the workers or students and create harmony and high performance. Typical topics are improving occupational safety and health, improving product design, and improvement in the workplace and manufacturing processes.
The term quality circles derives from the concept of PDCA (Plan, Do, Check, Act) circles developed by Dr. W. Edwards Deming.
Quality circles are not normally paid a share of the cost benefit of any improvements, but usually a proportion of the savings made is spent on improvements to the work environment.[citation needed]


They are formal groups. They meet at least once a week on company time and are trained by competent persons (usually designated as facilitators) who may be personnel and industrial relations specialists trained in human factors and the basic skills of problem identification, information gathering and analysis, basic statistics, and solution generation.[1] Quality circles are generally free to select any topic they wish (other than those related to salary and terms and conditions of work, as there are other channels through which these issues are usually considered).[2][3] Quality circles have the advantage of continuity; the circle remains intact from project to project. (For a comparison to Quality Improvement Teams, see Juran's Quality by Design.[4])

Contents
● 1 History
● 2 Student quality circles
● 3 See also
● 4 References

History
Quality circles were first established in Japan in 1962; Kaoru Ishikawa has been credited with their creation. The movement in Japan was coordinated by the Japanese Union of Scientists and Engineers (JUSE). The first circles were established at the Nippon Wireless and Telegraph Company but then spread to more than 35 other companies in the first year.[5] By 1978 it was claimed that there were more than one million quality circles involving some 10 million Japanese workers.[citation needed] They are now in most East Asian countries; it was recently claimed that there were more than 20 million quality circles in China.[citation needed]
Quality circles have been implemented even in educational sectors in India, and QCFI (Quality Circle Forum of India) is promoting such activities. However, this was not successful in the United States, as it (was not properly understood and) turned out to be a fault-finding exercise, although some circles do still exist; Don Dewar, together with Wayne Ryker and Jeff Beardsley, first established them there in 1972 at the Lockheed Space Missile Factory in California.
There are different quality circle tools, namely:

● The Ishikawa or fishbone diagram, which shows hierarchies of causes contributing to a problem
● The Pareto chart, which analyses different causes by frequency to illustrate the vital cause
● Process mapping, data gathering tools such as check sheets, and graphical tools such as histograms, frequency diagrams, spot charts and pie charts

Student quality circles
Student quality circles work on the original philosophy of Total Quality Management.[6] The idea of SQCs was presented by City Montessori School (CMS) Lucknow, India at a conference in Hong Kong in October 1994. It was developed and mentored by two engineers of Indian Railways, PC Bihari and Swami Das, in association with Principal Dr. Kamran of CMS Lucknow, India. They were inspired and facilitated by Jagdish Gandhi, the founder of CMS, after his visit to Japan where he learned about Kaizen. The world's first SQC was formed at CMS Lucknow with the then 13-year-old student Ms. Sucheta Bihari as its leader. CMS conducts international conventions on student quality circles, which it has repeated every two years to the present day. After seeing its utility, visionary educationalists from many countries started these circles. The World Council for Total Quality & Excellence in Education was established in 1999, with its corporate office in Lucknow and head office in Singapore. It monitors and facilitates student quality circle activities in its member countries, which number more than a dozen. SQCs are considered to be a co-curricular activity. They have been established in India, Bangladesh, Pakistan, Nepal, Sri Lanka, Turkey, Mauritius, Iran, the UK (Kingston University), and the USA. In Nepal, Prof. Dinesh P. Chapagain has been promoting this innovative approach through QUEST-Nepal since 1999. He has written a book entitled "A Guide Book on Students' Quality Circle: An Approach to prepare Total Quality People", which is considered a standard guide to promote SQCs in academia for students' personality development.[citation needed]
