Rapid methods update: revisions to a United States Pharmacopeia chapter
Posted: 3 September 2015
From 2010 to 2013, European Pharmaceutical Review published a very successful series on rapid microbiological methods (RMM) that included hot topics such as the European Medicines Agency’s and US Food and Drug Administration’s expectations, implementation strategies, scientific principles behind the technologies and validation.
The final article of the 2012 series introduced the United States Pharmacopeia’s (USP’s) plan to revise informational chapter <1223>, Validation of Alternative Microbiological Methods [1]. On 1 June 2015, a substantially revised chapter <1223> was published in the second supplement to USP 38/NF33, with an official date of 1 December 2015 [2]. Because the original USP chapter was published almost 10 years ago, this article reviews the most notable changes and compares them with the recommendations in Parenteral Drug Association (PDA) Technical Report Number 33 and the proposed revision to European Pharmacopoeia (Ph. Eur.) chapter 5.1.6.
A reason for change
In 2012, the scientific community learned of the USP Microbiology Expert Committee’s desire to significantly revise the 2006 version of USP <1223>. The committee envisioned a chapter that would offer greater flexibility in accommodating future alternative microbiological methods and be less prescriptive for a wide range of stakeholders, especially those that require novel technologies for the rapid release of specialised products (e.g., cellular therapy and compounded medicines). For these reasons, the committee developed an improved chapter with enhanced guidance on equipment qualification, analytical method validation and suitability, and user requirements, together with a clearer explanation of how to demonstrate equivalence or non-inferiority to the compendial methods.
At about the same time the USP started its revision process, the European Directorate for the Quality of Medicines initiated a program to enhance Ph. Eur. chapter 5.1.6, Alternative Methods for Control of Microbiological Quality. Earlier this year, a draft revision was published in Pharmeuropa for public review and comment [3]. The chapter was essentially completely rewritten to take into account new technological developments, the impact of process analytical technology (PAT) and real-life examples of how companies have validated alternative and rapid methods since the publication of the original chapter in 2006. Additionally, the proposed validation sections have been restructured to provide details on what are called primary validation and validation for the intended use associated with qualitative, quantitative and identification methods. And for those who recall the chapter’s appendix of an example protocol based on bioluminescence, this has been removed in favour of three new case studies employing an adenosine triphosphate-based rapid sterility test using membrane filtration, a quantitative test for the enumeration of microorganisms via solid phase cytometry (i.e., viability staining followed by laser excitation) and a PCR-based microbial identification technique.
Separately, in 2013, the PDA published its long-awaited revision of Technical Report Number 33 (TR33) [4]. The new document, Evaluation, Validation and Implementation of Alternative and Rapid Microbiological Methods, provides significantly enhanced guidance on a number of topics, including, but not limited to, risk assessments, user requirements, supplier considerations, implementation and technology transfer strategies, global regulatory expectations, equipment and software qualification, method suitability testing, the demonstration of equivalence, use of statistics and an updated technology overview.
All three guidance documents have been successfully utilised by multinational firms to support their alternative and rapid method validation and implementation approaches. However, recent changes to USP <1223> warrant the need to take a closer look at this revised chapter and understand the similarities and differences that exist with PDA TR33 as well as the proposed modifications to Ph. Eur. 5.1.6.
Instrument qualification and validation
The new USP chapter provides guidance on how to qualify equipment and instrumentation associated with an alternative microbiological method and specifically references another chapter, USP <1058>, Analytical Instrument Qualification, for additional details [5]. PDA TR33 and Ph. Eur. 5.1.6 provide similar recommendations for instrument qualification.
After the instrumentation has been qualified, USP <1223> recommends using a standardised panel of microorganisms against specific validation criteria in order to validate the analytical technique. At this stage, actual product is not used, nor is there a comparison to a compendial method. Similar procedures are recommended by PDA TR33 and Ph. Eur. 5.1.6; however, there are some differences worth mentioning. For example, only USP <1223> identifies repeatability as a validation criterion, which is actually a subset of precision in all three guidance documents. Additionally, the term ruggedness is not found in Ph. Eur. 5.1.6, but the same concept (as presented in USP <1223> and TR33) is addressed in chapter 5.1.6 as intermediate precision. Limit of detection is also not identified as a validation criterion for quantitative methods in Ph. Eur. 5.1.6.
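To make the accuracy and repeatability criteria concrete, the calculation typically reduces to mean recovery against a known spike level and the percent relative standard deviation of replicate counts. The following Python sketch is illustrative only, with hypothetical counts and a hypothetical 100 CFU spike; none of the three guidance documents prescribes this exact code or these acceptance values:

```python
from statistics import mean, stdev

def percent_recovery(counts, inoculum_cfu):
    """Accuracy: mean recovery of the alternative method against a known spike level."""
    return 100 * mean(counts) / inoculum_cfu

def repeatability_rsd(counts):
    """Repeatability: percent relative standard deviation (%RSD) of replicate counts."""
    return 100 * stdev(counts) / mean(counts)

# Hypothetical replicate counts from a nominal 100 CFU spike of one panel organism
counts = [92, 105, 98, 101, 96, 99]

print(f"recovery = {percent_recovery(counts, 100):.1f}%")  # accuracy criterion
print(f"RSD      = {repeatability_rsd(counts):.1f}%")      # repeatability criterion
```

In practice each organism in the standardised panel would be tested at several inoculum levels, and the acceptance limits for recovery and %RSD would be pre-specified in the validation protocol.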
Tables 1 and 2 compare each of the validation criteria to be assessed for a quantitative and qualitative method, respectively.
A significant difference exists between USP <1223> and the other two documents in terms of equivalence. USP <1223> defines equivalency as “[when] the test results from two procedures are sufficiently close for the intended use of the procedures. Demonstration of equivalence requires a pre-specified measure of how similar the test results need to be.” It can also be understood that the demonstration of equivalence in USP <1223> is conducted in the absence of actual product or test samples. Essentially, a panel of relevant microorganisms is used to compare the alternative method with the compendial method. Conversely, product is separately utilised during method suitability studies, which is defined in USP <1223> as a “[demonstration] of lack of enhancement or inhibition by the product on the signal generated by the method.” A more detailed discussion of USP’s method suitability strategy is presented in a subsequent section within this article.
PDA TR33 and Ph. Eur. 5.1.6 also utilise actual product during method suitability studies; however, the product is also required when demonstrating equivalency. TR33 and chapter 5.1.6 define equivalency as follows:
- TR33: “Equivalence or comparative testing involves the use of actual product and other sample matrices that will be routinely tested using the alternative or rapid method once it is validated and implemented.”
- Chapter 5.1.6: “A direct approach to demonstrating the equivalence of two qualitative methods would be to run them side-by-side and determine the degree to which the method under evaluation leads to the same pass/fail result as the pharmacopoeial method. This parallel testing shall be performed based on a pre-specified period of time or number of samples.”
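The side-by-side pass/fail comparison that chapter 5.1.6 describes can be summarised as a simple agreement calculation over paired results. The sketch below is a minimal illustration with invented data; the actual number of samples, acceptance criterion and handling of discordant results would be pre-specified by the user, not taken from any of the three documents:

```python
# Hypothetical paired results from parallel testing of the same samples;
# "P" = pass (no growth detected), "F" = fail (growth detected)
paired_results = [
    ("P", "P"), ("P", "P"), ("F", "F"), ("P", "P"),
    ("F", "F"), ("P", "P"), ("P", "F"), ("P", "P"),
]

# Fraction of samples where the alternative and compendial methods agree
agreement = sum(alt == comp for alt, comp in paired_results) / len(paired_results)

# Discordant pairs would be investigated before any equivalence claim is made
discordant = [(alt, comp) for alt, comp in paired_results if alt != comp]

print(f"agreement = {agreement:.0%}, discordant pairs = {len(discordant)}")
```

Note that a false negative by the alternative method (compendial fail, alternative pass) is far more consequential than the reverse, so the two types of disagreement are usually tallied and assessed separately.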
Considering a practical strategy for demonstrating equivalence, TR33 suggests employing similar procedures and data analyses to those previously utilised for assessing validation criteria with standardised cultures, such as accuracy, precision, limit of quantification, limit of detection, linearity or range. For a qualitative method, chapter 5.1.6 advocates demonstrating the same pass/fail result as a qualitative compendial method. For a quantitative method, chapter 5.1.6 states that, whether or not the result of the alternative method can be expressed as a number of colony-forming units (CFU) per weight or per volume, statistical analysis shall demonstrate that the results of the alternative method are at least equivalent to those of the compendial method.
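The "at least equivalent" requirement for a quantitative method is commonly framed as a non-inferiority test on log10-transformed counts. The sketch below is purely illustrative: the paired counts, the 0.3 log10 non-inferiority margin and the hardcoded t critical value are all assumptions for demonstration, not values drawn from USP <1223>, TR33 or chapter 5.1.6:

```python
from math import log10, sqrt
from statistics import mean, stdev

# Hypothetical paired counts (CFU) from the alternative and compendial methods
alt  = [140, 95, 210, 88, 160, 120, 175, 98, 130, 150]
comp = [135, 100, 200, 90, 150, 118, 170, 105, 125, 145]

# Work in log10 space, as microbial counts are typically treated as log-normal
diffs = [log10(a) - log10(c) for a, c in zip(alt, comp)]
n = len(diffs)

# One-sided 95% lower confidence bound on the mean log10 difference;
# 1.833 is the t critical value for df = 9 at one-sided alpha = 0.05
T_CRIT = 1.833
lower_bound = mean(diffs) - T_CRIT * stdev(diffs) / sqrt(n)

# Non-inferiority margin: the alternative method must recover within 0.3 log10
# (roughly a factor of two) of the compendial method; the margin is a choice
# the user must justify in the protocol, not a compendial requirement
MARGIN = -0.3
print("non-inferior" if lower_bound > MARGIN else "inconclusive")
```

If the lower confidence bound on the mean difference sits above the pre-specified margin, the alternative method's recovery is concluded to be no worse than the compendial method's by more than that margin.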
USP <1223> describes four options that may be used to demonstrate that an alternative method is equivalent to a compendial method. These options are based on a 2009 stimuli article published in USP’s Pharmacopeial Forum [6]. Three options (recognised as performance equivalence, results equivalence and decision equivalence) allow for the direct comparison with a compendial method. Multiple characteristics are compared when using the performance equivalence option, and this strategy is most closely aligned with the recommendations in PDA TR33 and the proposed Ph. Eur. 5.1.6. Alternatively, a single characteristic is compared when using the results or decision equivalence options. Interestingly, a fourth option (identified as the acceptable procedure) does not require a comparison between an alternative and a compendial method; only a minimum performance or acceptance condition is required. USP provides examples of how to conduct studies based on the results equivalence option (using a quantitative method) and the decision equivalence option (using a qualitative method); however, there is no specific guidance on when each of the four options would be appropriate for use.
USP <1223> teaches that for each new product to be evaluated with a validated alternative method, suitability testing should be performed using the same sample preparation, quantity and number of units appropriate for the product and the required level of assay sensitivity. Furthermore, method suitability testing should demonstrate that the alternative signal is not quenched or increased in the presence of the product being evaluated. Essentially, the user is demonstrating that the product is compatible with the validated alternative method. Accuracy and precision are required to be evaluated for quantitative methods, and the recovery of microorganisms according to USP <62>, <71> and <1227> is demonstrated for qualitative methods. And since method suitability is generally the only time actual product is used within USP’s validation approach, this phase would be similar to what is termed equivalence testing in TR33 and chapter 5.1.6.
USP also states that once an alternative method has been shown to be equivalent to a compendial test for a single product, there is no need to repeat the equivalency parameters for every new product (i.e., only method suitability is to be verified for each new product). For the purpose of clarity, this author assumes the ‘equivalency parameters’ USP is referring to are the accuracy, precision and detection studies as mentioned above, considering actual product or test samples are not necessarily identified as being used during equivalence testing. However, if the USP intended for the product to be tested according to one of the four equivalence options, in addition to what is described under method suitability, this should have been unmistakably stipulated in the revised chapter.
In PDA TR33 and the proposed Ph. Eur. 5.1.6, method suitability confirms the compatibility of a product or test sample in the alternative procedure:
TR33: “To demonstrate that the new method is compatible with specific product or sample matrices that will be routinely assayed, each material should be evaluated for the potential to produce interfering or abnormal results, such as false positives (e.g., a positive result when no viable microorganisms are present in the test sample) or false negatives (e.g., a negative result when microorganisms are present in the test sample). This may also include evaluating whether cellular debris, dead microorganisms or mammalian cell cultures have any impact on the ability of the new method and accompanying system to operate as it is intended to.”
Chapter 5.1.6: “The alternative method must be applied according to the specified procedure and with the samples to be analysed under the responsibility of the user. The method must be shown to give comparable results as characterised in the model system used by the supplier. Compatibility of the response with the product prepared as needed by the user, evaluated using pharmacopoeial test strains.”
Therefore, the guidance provided in TR33 and chapter 5.1.6 focuses method suitability on the potential for a product or test sample to generate an incorrect response (i.e., false positives, false negatives or interference). Although no specific criteria are identified by TR33 when conducting method suitability studies, similar strategies as described above for the validation criteria may be employed (e.g., evaluating accuracy and precision for quantitative methods; limit of detection or inclusivity/exclusivity for qualitative methods). For suitability testing, the proposed revision to Ph. Eur. 5.1.6 recommends assessing the detection limit for qualitative methods and accuracy, limit of quantification and linearity for quantitative methods.
It is important to remind the reader that the PDA and Ph. Eur. strategies outlined in the prior paragraph apply to method suitability testing and are not a demonstration of equivalence, even though actual product or test sample is used in both analyses. To reiterate, method suitability evaluates incompatibilities with a test material when examined in an alternative method; equivalence demonstrates a statistically similar, non-inferior or better response between an alternative and a current/compendial method in the presence of a test material. Therefore, method suitability and equivalence are evaluated as separate exercises. In contrast, it appears that USP <1223> combines these two concepts within its version of method suitability.
Based on this comparative review of the new USP <1223>, PDA TR33 and the proposed Ph. Eur. 5.1.6, the reader may be somewhat confused in attempting to understand the similarities and differences between the three documents. To simplify what each guidance document is communicating, Table 3 provides a streamlined summary of what has been examined in this article.
USP <1223> has undergone a significant revision with the intent of offering stakeholders greater flexibility in validating alternative and rapid microbiological methods. Some of the changes are comparable to the strategies presented in PDA TR33 and the proposed Ph. Eur. chapter 5.1.6 while others are appreciably different. Therefore, end-users should thoroughly review each of the validation guidance documents to determine which ones offer the most appropriate options for a successful validation program. The reader is also encouraged to visit http://rapidmicromethods.com for additional guidance on qualification strategies and regulatory expectations during the validation and implementation of these novel technologies.
1. Miller MJ. 2012. Hot Topics in Rapid Methods: Revisions to Validation Guidance and Real-Time Environmental Monitoring. European Pharmaceutical Review. 17(6): 58.
2. 2015. <1223> Validation of Alternative Microbiological Methods. United States Pharmacopeial Convention. USP 38/NF33: 1439.
3. 2015. Chapter 5.1.6 Alternative Methods for Control of Microbiological Quality. European Directorate for the Quality of Medicines & HealthCare. Pharmeuropa 27.1: 8.
4. 2013. Evaluation, Validation and Implementation of Alternative and Rapid Microbiological Methods. Technical Report No. 33 (Revised 2013). Parenteral Drug Association.
5. 2015. <1058> Analytical Instrument Qualification. United States Pharmacopeial Convention. USP 38/NF33: 971.
6. Hauck WW, DeStefano AJ, Cecil TL, Abernethy DR, Koch WF, Williams RL. 2009. Acceptable, equivalent, or better: approaches for alternatives to official compendial procedures. Stimuli to the Revision Process. Pharmacopeial Forum. 35(3): 772.
Dr. Michael J. Miller is an internationally recognised microbiologist and subject matter expert in pharmaceutical microbiology and the design, validation and implementation of rapid microbiological methods. He is currently the President of Microbiology Consultants, LLC (http://microbiologyconsultants.com) and owner of http://rapidmicromethods.com, a website dedicated to the advancement of rapid methods.
For more than 25 years, he has held numerous R&D, manufacturing, quality, business development and executive leadership roles at multinational firms such as Johnson & Johnson, Eli Lilly and Company and Bausch & Lomb. In his current role, Dr. Miller consults with multinational companies, providing technical, quality, regulatory and training solutions in support of rapid methods, sterile and non-sterile pharmaceutical manufacturing, contamination control, isolator technology, environmental monitoring, sterilisation and antimicrobial effectiveness.
Dr. Miller has authored more than 100 technical publications and presentations. He currently serves on the editorial and scientific review boards for American Pharmaceutical Review, European Pharmaceutical Review and the PDA Journal of Science and Technology. Dr. Miller holds a PhD in Microbiology and Biochemistry from Georgia State University (GSU), a BA in Anthropology and Sociology from Hobart College, and is currently an adjunct professor at GSU.