Critical steps when validating an assay or analytical method for cell and gene therapy products
Dr Kiren Baines, Analytics Lead at eXmoor Pharma, details some of the critical steps when validating an assay or analytical method, including key considerations for developing an experimental plan aligned with ICH guidelines.
When developing a process workflow for your clinical or commercial product, there are several variables that must be considered. Many early‑stage gene therapy companies use techniques common in academic labs such as adherent cultures, which use foetal bovine serum (FBS), freeze-thaw lysis and ultracentrifugation. We call this the starting point, or “as-is” process.
The aim when developing a process is to improve manufacturing scale, robustness and cost, while achieving the necessary levels of quality and safety that are vital for therapeutic products.
The “as-is” process often fails to meet these requirements for a clinical or commercial product, which prompts the development of a more suitable “to-be” process which can align with these objectives. To meet the quality, safety and cost considerations, this often involves developing suspension processes in chemically defined medium, chemical lysis, capture and polish chromatography steps, and diafiltration to formulate the product.
The overall process development workflow consists of several unit operations. Each unit operation requires the completion of several studies from early- to late-stage development. For example, the types of studies that may be required when investigating expansion include cell line evaluation, media screening and cell banking. For purification, studies might include resin screening, capture and polish optimisation, and UFDF optimisation. The optimisation of the process development workflow generates many samples, each of which must be analysed.
There is a range of analytical platforms available for characterising your bioprocess workflow. Due to the large volume of analytical samples, it is important to consider throughput to reduce the risk of creating bottlenecks.
When determining which analytical platform to use, it is important to consider whether the platform is fit for purpose. In many cases, it is essential to perform analytics using a validated assay, which can require development of an assay to achieve validated status.
Achieving high throughput: the pros and cons
The analytical assays that require the most careful consideration are those that must maintain accuracy, precision and high throughput simultaneously. These parameters sit in a fine balance: accuracy and precision can be compromised as you attempt to increase throughput, and vice versa. The ideal industry technologies are those that maintain all three to a high degree, and automation is a vital tool for achieving this. Alongside the number of samples and how they are processed, the assay itself, and the levels of validation and qualification it requires, are equally important.
How do you validate an assay?
Realistically, assay validation is a continuous process that starts with a description of the method's purpose, development of the assay, and definition of its performance characteristics. It concludes with documentation of the methodology and the validation results.
Developing a validated assay in line with ICH guidelines
When validating an assay or analytical method, validity must be established to meet the requirements of regulatory authorities. The ICH guidelines provide a detailed breakdown of the types of experiments you should carry out as well as the priorities you should consider when performing these experiments.
A summary of the types of experiments vital to satisfying regulatory requirements is as follows.
Linearity and MRD
It is imperative to recognise the difficulty of developing a process with only one analytical method; several orthogonal analytical methods are needed to understand the key aspects of your process.
When developing your experimental design, it is important to demonstrate linearity. Linearity assesses the ability of an assay to return results directly proportional to the concentration of your specific analyte across a range, such as a series of dilutions. This kind of experiment also provides valuable information on the minimum required dilution (MRD). To truly understand the extent to which good linearity is achieved, a plot should be drawn of dilution factor against analyte concentration. The relationship between these two variables should be statistically analysed; for example, by calculating the R² value, correlation coefficient, y-intercept and slope of the regression line.
Range and accuracy
Often the range of an assay is determined alongside linearity. This includes data on the lower and upper limits of detection and quantification. Determining the range of an assay also yields data on accuracy and precision. There are several ways to determine accuracy, i.e., how close a measured value is to the true value. This can be achieved by comparing the value obtained with results from other analytical platforms. Accuracy can also be established from the precision, specificity and linearity of an assay, or through spiking studies.
Precision and robustness
It is fundamental to demonstrate the precision of an assay. Precision looks at the reproducibility of measurements; for example, capsid concentration across a number of repeats (repeatability). Precision is often statistically depicted using coefficient of variation; the lower this value, the greater the precision of an assay. In addition to reproducibility and repeatability, intermediate precision must often also be determined.
Intermediate precision assesses the effect of within-laboratory variations, such as running experiments on different days, on the precision of the assay. Similar experiments are carried out when assessing the robustness of an assay. Variations often occur in the laboratory, such as temperature fluctuations and different operators performing assays on different days; thus, it is important to demonstrate that these variables have no effect on the assay and its data.
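The coefficient of variation described above is simple to compute; a short sketch, using hypothetical capsid-titre readings:

```python
# Hypothetical precision summary: %CV across replicate capsid-titre readings.
import statistics

def percent_cv(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Six replicate readings on one day by one operator (repeatability)
day1 = [1.02e12, 0.98e12, 1.01e12, 0.99e12, 1.03e12, 0.97e12]
# Readings from a different day/operator (feeds into intermediate precision)
day2 = [0.95e12, 1.05e12, 1.00e12, 0.96e12, 1.04e12, 0.99e12]

print(f"repeatability %CV      = {percent_cv(day1):.1f}")
print(f"pooled (both days) %CV = {percent_cv(day1 + day2):.1f}")
```

The lower the %CV, the greater the precision; comparing the within-day figure against the pooled figure gives a first indication of how much day-to-day variation contributes.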
In addition to these investigations, when validating an assay you must be able to track the influence of a specific change, which means planning your variables in advance. For example, in a robustness study into operator variability, operator must be the only variable – you cannot also introduce alternative matrices. With a plan in place, the only other variable you can directly affect is time.
Contingency and the number of replicates
The ICH guidelines state precisely how many replicates are needed to be satisfied that a result is consistent and, subsequently, how many times that same result must be achieved with different operators. For example, after running a technique five times with the same result, is that enough to ensure accuracy and precision?
Automation and the rise of digitalisation
As the industry gravitates towards automation, assay validation will become an even greater consideration. Where automation can provide an increase in accuracy and precision, it should be employed. Alongside automation, demand for digitalisation has also risen. Over the next few years, automation and digitalisation will likely have a significant impact on analytical assays and assay validation.
About the author
Dr Kiren Baines BSc, MSc, PhD has several years’ experience working in a laboratory environment, in academic and industry settings. Kiren has now taken a lead role in establishing and developing eXmoor’s analytical capabilities. This includes playing a role in the design of the analytical lab space in their facility. She continues to be motivated by innovative science and new ways to improve analytics within the sector. Overall, her main motivations are to continue work that helps improve the lives of patients with innovative therapeutics.