
For Life Science Professionals

Process Verification vs. Process Validation: What You Need to Know

by Andrew Snow, Momentum Solutions, LLC and Walt Murray, MasterControl, Inc.

Note: The views expressed in this article are those of the authors and do not necessarily represent those of their respective employers, GxP Lifeline, its editor or MasterControl, Inc.

Process validation officially became part of the FDA's Quality Systems Regulation in 1997. Fifteen years later medical device manufacturers still struggle with determining which processes require validation. The confusion traces back to two words, "fully verified." What does "fully verified" mean? What do I do if I determine that a process can't be "fully verified?" And, equally important, what are the FDA's expectations when I can't?

The answer to the "fully verify" question can have big consequences. Failure to identify a process that requires validation can cause compliance issues, including warning letters, delays in pre-market submissions and field actions. A conservative approach of validating everything can be costly. And worse yet, some people erroneously think that if they can fully verify something, the verification has to be 100%. Unless automated, this may be a statistically invalid approach because even manual 100% inspection is statistically not 100% effective.

The goals of this article are:

- To clarify how to determine if something can be fully verified;
- To explain how to plan validation and process controls for those that can't be fully verified; and
- To show how to monitor and control processes using a risk-based approach.

This will be accomplished by:

- Revisiting some definitions of key terms and the regulatory requirements
- Describing how we determine what's critical to quality (CTQ)
- Dissecting the term "fully verified" and describing the criteria for determining if something is fully verifiable
- Describing how to plan validation for processes that can't be fully verified
- Outlining a risk-based approach for controlling processes that can be fully verified

Definitions and Regulatory Requirements

The requirement for process validation established in the FDA's Quality Systems Regulation is stated in Part 820.75(a)1:

"Where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures."

A clear intent of the regulation is that quality cannot be inspected into a product. This has been firmly established in quality thinking. In essence, this requirement is stating that for some processes, inspection alone may not be sufficient. This is particularly true when the defect only becomes apparent or manifests after the product is in distribution or use. This could be because verification is not possible (e.g., reliability or durability) or it is not sufficient (e.g., requires destructive testing). The core of this requirement is that in order to gain confidence that the likelihood of these types of defects is small, we must use the capabilities of a process. This is clarified when we look at the definition of process validation in the regulation, which states that process validation is2:

"...establishing by objective evidence that a process consistently produces a result or product meeting its predetermined specifications."

Thus, we'll see that this requires understanding the sources of variation and reducing and controlling that variation to establish a process that is capable of consistently meeting specifications. Such specifications and their associated acceptance criteria are determined using adequate design controls.

Design Controls and Process Validation

Before we dissect the term "fully verified" we need to go back and see how we determine which specifications are of concern. This determination should be established using effective design controls. Most professionals make the connection between design transfer and process validation, yet they overlook essential design outputs: the key design outputs that we must produce. So, one of the critical connections between process validation and design controls is that design outputs define the "predetermined specifications," or the results a process needs to meet.

One of the requirements for documenting design outputs is that they contain or reference acceptance criteria. These are sometimes referred to as critical to quality (CTQ) characteristics. In this sense, quality is defined as "the totality of features and characteristics that bear on the ability of a device to satisfy fitness-for-use, including safety and performance."3 These features and characteristics form the basis for acceptance criteria.

While there are many tools that can be used for developing CTQs, such as Design for Six Sigma (DFSS), Quality Function Deployment (QFD), and focus groups, from an FDA perspective a risk-based approach is imperative. Using process tools such as Fault Tree Analysis (FTA) or Design Failure Mode and Effects Analysis (DFMEA) to link product hazards to specifications is a thorough approach to effective post-control measures. So, for example, in DFMEA we ask: what are the effects of a design failing to meet its design requirements? If the failure results in a safety hazard to a patient or end user, then that specification becomes a CTQ. It is these characteristics that we'll focus on when asking if they can be fully verified.
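This hazard-to-CTQ reasoning can be sketched in a few lines. The rows, severity scale and cutoff below are hypothetical illustrations, not values from any actual DFMEA:

```python
# Hypothetical DFMEA rows: (design requirement, effect of failure, severity 1-10).
dfmea = [
    ("seal burst strength >= 2.0 psi", "loss of sterile barrier", 9),
    ("label adhesion", "cosmetic defect", 2),
]

# A specification becomes a CTQ when the failure effect is a safety hazard;
# here we approximate "safety hazard" as severity >= 8 (an illustrative cutoff).
ctqs = [req for req, effect, severity in dfmea if severity >= 8]
print(ctqs)  # -> ['seal burst strength >= 2.0 psi']
```

In a real DFMEA the hazard judgment is made per the risk management file, not by a single numeric threshold; the point is only that the CTQ list falls out of the linkage between requirements and failure effects.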

Dissecting Fully Verified

To help understand the term "fully verified," we can go to the Global Harmonization Task Force's (GHTF) Process Validation Guidance document4. This document shows a decision tree which may be helpful in determining which processes should be validated. Looking at Figure 1, there are two fundamental questions to be answered: is the output (CTQ) verifiable, and is verification sufficient and cost effective?

Most quality characteristics can be verified. Verification methods run the gamut from chemical tests such as pH or conductivity, to physical tests such as tensile strength, dimension, or hardness, to electrical measurements such as voltage and impedance. In some cases, the measurements can't easily be done on routine production; characteristics such as reliability or durability are thus not effectively "detectable." This is the main concern for detectability as a key component of an FMEA. In other cases, the verifications are not sufficient because the method is destructive and reasonable sampling schemes may not be sufficient. One simple test to apply is asking if sampling "could be" 100%; in the case of a destructive method the answer would be no. Finally, some tests (detection) lack the sensitivity to be effective as part of routine production.

Figure 1 - GHTF Process Validation Decision Tree

The cost effectiveness of the verification can also be an important consideration. In many cases, it may be more prudent to validate the process upfront to understand and control variation, thereby improving process capabilities, increasing yields and lowering scrap. These factors will generally outweigh even a reduced cost of inspection, making a strong business case for validation.
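The two questions from the decision tree can be paraphrased in a few lines. This is a sketch of the logic, not the official GHTF wording, and the function name is mine:

```python
def requires_validation(verifiable: bool, sufficient_and_cost_effective: bool) -> bool:
    """GHTF-style decision: validate when the output cannot be verified,
    or when verification is not both sufficient and cost effective."""
    if not verifiable:
        return True
    return not sufficient_and_cost_effective

# A destructive burst test can't be run on every unit, so verification
# is not sufficient -> the sealing process should be validated.
print(requires_validation(verifiable=True, sufficient_and_cost_effective=False))  # True
```

The value of writing the decision down this way is that each answer ("verifiable?", "sufficient and cost effective?") must be justified with a documented rationale, which is exactly what the FDA will look for.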

The GHTF Guidance gives us several examples of processes that should be validated, including:

- Sterilization processes
- Aseptic filling
- Sterile package sealing
- Lyophilization
- Heat treating
- Plating processes
- Plastic injection molding

It's worth noting that the guidance says "should be." It is incumbent upon the manufacturer to understand the CTQs for its product through Quality by Design (QbD) and assess whether verification is sufficient. However, the FDA will expect these processes to be validated, so if you determine that verification is sufficient, you will need to document your rationale appropriately.

Planning for Validation

The best place to document your validation decisions is in the Master Validation Plan (MVP). While not required, it is best practice to document your decisions and outline the plan for the processes that will need to be validated. The plan should scope out the validation effort, including the facilities, processes, and products covered by the plan. All the equipment under the scope of the plan should be identified, including any utilities such as electrical, water, and air compressors. It should also include the process equipment and any environmental controls such as clean rooms or Electrostatic Discharge (ESD) controls. Plans will spell out the Installation, Operational and Performance Qualifications (i.e., IQ, OQ and PQ) to be done for each process. It is also common to see test methods and software used to automate processes included in the lifecycle of the product, although these may not be stated as IQ, OQ or PQ requirements.

The installation qualification shows that the equipment is installed according to its specifications. This document typically comprises checklists and simple verifications such as fixture inspections, gauge calibration or preventive maintenance checks. If automated software is used in the process, the IQ will check to make sure the right version is installed and validated. One can look at the requirements in Production and Process Controls, Part 820.705, to use as a basis for making a good IQ checklist.

The OQ and PQ are the heart and soul of process validation. Once assured that the equipment is installed to specification, the manufacturer has greater confidence that the equipment is operating properly and can start to use the equipment to understand the sources of variation and work towards establishing a capable process. The goal of process characterization during the OQ is to understand the effects process inputs (e.g., temperature, time, and pressure) have on the outputs (e.g., burst strength for a sterile package seal).

GHTF Guidance Figures 4, 5, and 6 help illustrate the concepts. These are adapted below in Figures 2 and 3.


Figure 2 - Moving from an Unstable to a Stable Process during the OQ

The goal of the OQ is to understand what causes the instability in GHTF Figure 4 and to reduce and control that variation to produce the stable process on the right. We should conclude through confirmation runs that the stable process has the potential to meet our capability requirements. These results are typically achieved through designed experiments, which explore a wide range of possible input variables through screening experiments and then refine and optimize the most significant variables to produce a stable process with acceptable process potential capabilities. While it is tempting to cut straight to the vital few variables based on knowledge and experience to eliminate some of the experimental steps, these assumptions should be carefully documented in a risk-based process model and supported by scientific and historical data that clearly shows the relationships between input and output variables. A high-risk consequence condition can trump a Pareto approach.
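A screening experiment of the kind described above can be laid out programmatically. This sketch enumerates a two-level full factorial for three hypothetical heat-sealing inputs; the factor names and levels are illustrative, not from the article:

```python
from itertools import product

# Two-level settings for three hypothetical inputs to a heat-sealing process.
levels = {
    "temp_C":    (120, 140),
    "time_s":    (1.0, 2.0),
    "press_kPa": (200, 300),
}

# Full factorial: every combination of low/high settings (2**3 = 8 runs).
runs = [dict(zip(levels.keys(), combo)) for combo in product(*levels.values())]
print(len(runs))  # 8
```

In practice the run order would be randomized and, with many factors, a fractional factorial would be used for screening before optimizing the vital few.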

The goal of the PQ is to show that the process is capable under conditions anticipated during manufacturing. These conditions may include multiple shifts, operators, material lots and other factors that represent potential sources of uncontrollable variation. The purpose of process validation is not so much to show you have excellent process capabilities but to demonstrate you know why you have excellent process capabilities.

Once process validation is completed, the manufacturer is required to establish monitoring and control to ensure the validated state of control is maintained (Part 820.75(b)6). The manufacturer should document that the validated process was performed by qualified operators and note the monitoring and control methods, data, date performed, the individuals performing the process, and the major equipment used.

Figure 3 - Demonstrating a Capable Process during the PQ

Risk-Based Approaches to Monitoring and Controlling Processes

Whether you have validated a process or determined that verification is sufficient, process monitoring and control are required. For validated processes, this is established in Part 820.75(b) and for all processes it is established in Part 820.70(a):

"Where deviations from device specifications could occur as a result of the manufacturing process, the manufacturer shall establish and maintainprocess control procedures that describe any process controls necessary to ensure conformance to specifications. Where process controls are neededthey shall include among other things 'monitoring and control of device parameters and component and device characteristics during production.'"

The fundamental concepts are the same whether or not a process is validated, though additional monitoring and control requirements may be commensurate with a validated process. A risk-based approach is used for such special requirements, which can then be developed as part of a control plan based on a process FMEA.
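One common way to drive a control plan from a process FMEA is the Risk Priority Number (RPN). The sketch below uses hypothetical rankings and an illustrative action threshold:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number on the usual 1-10 FMEA scales."""
    return severity * occurrence * detection

# Hypothetical process steps with (severity, occurrence, detection) rankings.
steps = {
    "seal package": rpn(9, 1, 2),   # high severity, but capable and well monitored
    "trim flash":   rpn(3, 4, 5),   # low severity, but weak detection
}

# Steps above an illustrative threshold get extra controls in the plan.
needs_extra_controls = [step for step, score in steps.items() if score > 40]
print(needs_extra_controls)  # -> ['trim flash']
```

Note how validation earns the sealing step its low occurrence and detection rankings: good capability plus effective monitoring keeps the RPN down even when severity is high.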


So if you don't validate, do you have to do 100% inspection in order to fully verify? No - the regulation doesn't say this. It does say in 820.70(a): "Where deviations from device specifications could occur as a result of the manufacturing process, the manufacturer shall establish and maintain process control procedures that describe any process controls necessary to ensure conformance to specifications." Where process controls are needed, they shall include, among other things, "monitoring and control of device parameters and component and device characteristics during production." It also tells us in 820.80 that we need to establish in-process and final acceptance activities. Finally, in 820.250, statistical techniques are delineated: "Where appropriate, each manufacturer shall establish and maintain procedures for identifying valid statistical techniques required for establishing, controlling, and verifying the acceptability of process capability and product characteristics."

There has been at least one warning letter for not 100% testing in the absence of validation, but interpreting requirements based on one or two warning letters is difficult. In 2009, the Cincinnati District office of the FDA issued a warning letter to Hammill. The warning letter states that Hammill's response to not validating certain cleaning and passivation processes and CNC equipment because they did in-process and final inspections/tests was "inadequate because you are not testing every device to assure it meets specifications."7 Of course, reading the warning letter without the 483, let alone the Establishment Inspection Report, is difficult. We don't know about the acceptance criteria (CTQs), how they were measured, whether the testing was destructive and, hence, whether subsequent inspection or tests were sufficient. So it is difficult to make the leap to a requirement that all processes that are not validated must be 100% tested. In the end, the regulation clearly states that process controls should be based on appropriate statistics, which require some knowledge of risks in order to be applied properly.

An example will help illustrate how to establish controls based on risk and the significant role validation plays in establishing the basis for control. For example, the CTQ for burst strength is tied to the risk associated with non-sterile packaging due to package seal failure, as shown in Figure 4. To achieve an acceptable risk level, we had to establish better than six sigma process capabilities during process validation. The trick is to establish process monitoring that has a low risk of not detecting a shift to an unacceptable level of process capability of five sigma, or a greater likelihood of generating defective seals. Using statistical process control (SPC), we establish a control plan that indicates an effective control measure of burst tests will be performed on a sample of 25 units every hour of production. We are concerned with a 1.0 sigma shift in the process average burst strength, as this would reduce process capability to an unacceptable level. We can calculate the beta risk, or risk of not detecting the shift, using the formula below8:

β = Φ(L - k√n) - Φ(-L - k√n)

Where:

β = beta risk
Φ() = cumulative standard normal distribution
L = the number of sigma in the lower and upper control limits, typically 3
k = process shift in sigma units
n = sample size

For our example above, if we assume the typical three sigma control limits and a desire to detect a 1.0 sigma shift, the beta risk is estimated at 0.002, giving us a high probability of detection, or a detection ranking of 2 versus a 1 (almost certain to detect). This, along with the occurrence ranking of 1, gives us acceptable risk. From this we can see that the combination of acceptable process capabilities and process monitoring is required to make risk acceptable.

This is just one example of how to estimate the beta risk and set SPC control measures. The average run length, or the average number of samples necessary to detect a shift, can also be used, which helps bring sample frequency into the equation.
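Both quantities can be computed with the standard library alone. In this sketch the function names are mine, and the subgroup size of 25 is the article's illustrative value:

```python
import math

def phi(x):
    """Cumulative standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def beta_risk(L, k, n):
    """Probability that the next subgroup mean stays inside +/- L sigma
    control limits after a k-sigma shift in the process mean (subgroup size n)."""
    return phi(L - k * math.sqrt(n)) - phi(-L - k * math.sqrt(n))

def average_run_length(L, k, n):
    """Average number of subgroups sampled before a point signals the shift."""
    return 1.0 / (1.0 - beta_risk(L, k, n))

# With no shift (k = 0) and 3-sigma limits, the in-control ARL is about 370,
# the classic false-alarm interval for a Shewhart chart.
arl0 = average_run_length(L=3, k=0.0, n=25)
```

Sampling 25 units per hour, multiplying the ARL for the shift of concern by the one-hour interval gives the expected time to detection, which is how sample frequency enters the risk calculation.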

Figure 4 - Example Risk-based Control Plan for Burst Strength

Conclusion

The key to understanding whether a process requires validation is to understand if it is verifiable and to assess the sufficiency of verification. It is best practice to document these decisions and plan the validation effort. Some tests cannot be done on a routine basis, are destructive or lack the sensitivity to be sufficient. The heart of validating a process is ensuring it is installed to specification, then characterized and optimized so that it is under control and capable of consistently meeting specifications. The risk-based approach shows that understanding how to achieve excellent process capabilities reduces the likelihood of defects in the first place. In addition, process monitoring with low beta risk assures detectability so that problems can be addressed as necessary. Verification alone may not be sufficient to produce acceptable levels of risk.


References

1. Medical Devices; Current Good Manufacturing Practice (CGMP) Final Rule; Quality System Regulation, Federal Register, Vol. 61, No. 195, Monday, October 7, 1996, Rules and Regulations, p. 52659
2. Medical Devices; Current Good Manufacturing Practice (CGMP) Final Rule; Quality System Regulation, Federal Register, Vol. 61, No. 195, Monday, October 7, 1996, Rules and Regulations, p. 52656
3. Id.
4. Quality Management Systems - Process Validation Guidance, GHTF/SG3/N99-10:2004 (Edition 2)
5. Medical Devices; Current Good Manufacturing Practice (CGMP) Final Rule; Quality System Regulation, Federal Register, Vol. 61, No. 195, Monday, October 7, 1996, Rules and Regulations, p. 52658
6. Supra note 1.
7. Food and Drug Administration, Warning Letter, Hammill Manufacturing Company, 01/06/09
8. Introduction to Statistical Quality Control, 5th Ed., Douglas C. Montgomery, John Wiley & Sons, 2005, p. 217.

Andrew Snow has more than 25 years' experience directing and leading operations, quality, and process design-development for medical device companies, contract manufacturers, suppliers and several successful start-up ventures. He is president of Momentum Solutions, LLC, a consulting firm specializing in solutions for design control, risk management and process validation for medical device companies.

Andrew has championed several lean and Six Sigma programs reducing cycle time and improving process capability, reliability, order delivery and customer satisfaction. He is an instructor with the Association for the Advancement of Medical Instrumentation (AAMI) and was recently the subject matter expert for revisions to their process validation course. He is a graduate of Northwestern University, where he obtained a master's degree from the School of Industrial Engineering, and of the University of Arizona, with a certificate in Systems Engineering.

Walt Murray is MasterControl's director of quality and compliance services. He is a specialist in the quality and regulatory professions with more than 25 years' experience to his credit, working with nationally-recognized organizations including Aventis-Pasteur, Merck, Pfizer, Stryker, USANA, Del Monte Foods and the American Red Cross National Labs. He is certified in quality systems auditing, problem solving, and process control using Six Sigma principles that support lean enterprise, including kaizen improvement and advanced planning principles. His extensive audit experience covers several industries and he has successfully brought several medical device companies to full registration under the ISO process model standard. Murray has also worked extensively in risk and supplier management.

A graduate of the University of Richmond, Murray is a member of the Society of Manufacturing Engineers (SME); the Regulatory Affairs Professionals Society (RAPS); the American Society for Quality (ASQ); and the Intermountain Biomedical Association (IBA).
