Presenters:

Dr. Jonathan Frampton, Associate Director - Diagnostics, Horizon Discovery

Dr. Vicky Spivey, Senior Scientist - Diagnostics, Horizon Discovery

Despite technical advances, assessing the accuracy of pre-PCR steps, which include DNA extraction from formalin-fixed paraffin-embedded (FFPE) tissues, DNA quantitation and DNA quality control, remains a key challenge in external quality assurance.

In this webinar we will discuss the latest results from recent studies and look at ways that the accuracy of pre-PCR workflows can be improved.

JF: Now, what I’d like to do is talk for maybe 25 to 30 minutes about how the pre-PCR steps can introduce variability into your assay, what is a good way for you to identify that variability, and then how to mitigate it in the future. It always comes back to one question I like to ask people when I’m talking to them: what is the impact of assay failure in your laboratory, and how do you go about monitoring for it? That’s the key question to always ask, because if any variability creeps into the assay then the results are potentially erroneous.

Now, it’s a good idea to break the molecular diagnostic assay down into two parts: the pre-analytical step and the analytical step. On the pre-analytical side, you’ve got your biopsy: you cut it out of a patient, you put it into a bucket of formalin, and then you go through FFPE processing. That block is then transferred into a molecular pathology lab, where they’re going to do DNA extraction. You may then have to store that sample for anything from a few days to a few weeks to a few months before taking it onto the analytical side of things. You’ll quantify the DNA, predominantly with a NanoDrop spectrophotometer or using a fluorometric approach. This side of the process will govern the quality and quantity of DNA that you get out of your sample.

On the flip side you have the analytical step, and the analytical side is strongly determined by how good the pre-analytical steps were. You have your DNA sample; if you’re going down a next-generation sequencing route you may have to go through some kind of sample preparation, whereas if you’re doing qPCR you just take your DNA and run qPCR on it. The analysis then leads on to an actionable decision. The important point is the starting point: if you’ve over-quantified or under-quantified the amount of DNA in your sample, that can have a detrimental impact on the actionable result and decision at the end of this process.

So what I want to do, just very briefly (we have a whole separate webinar on what we do and the types of reference standards that we make), is give you a quick snapshot of what our FFPE reference standards are, because the data our guest speaker Vicky Spivey will be presenting uses a number of them. What we’ve done is taken cell lines that we’ve gene edited ourselves and put through extensive characterisation. We take the cell lines as mutant and wild type and generate an FFPE block with a defined allele frequency. We then extract the DNA ourselves and characterise the sections ourselves using digital PCR, Sanger sequencing, real-time PCR or alternative SNP assays. Once we’ve done that, we build a product around each batch, and this is a snapshot here. On the left-hand side of the screen you’ve got three different FFPE blocks that we’ve made for BRAF V600K. One was a 50% allelic frequency for the mutation, and with digital PCR on sections at the start, middle and end of the block you can see the consistency of the allelic frequency throughout the block. The same is true for the 5% block, and when we generated a 1% block it came in at just under 1%, at 0.8%. The right-hand side shows how consistent and reproducible our blocks are: it shows eight different blocks, we took multiple sections from each block and did DNA extractions, and you can see the DNA extractions per block are very consistent.
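
As an aside, the relationship between the mutant:wild-type cell blend and the resulting allele frequency can be sketched as below; the assumption of a diploid, heterozygous mutant line is for illustration only and is not a detail taken from the slide.

# Sketch of how a target allelic frequency can be set by mixing mutant and
# wild-type cells. Assumes a diploid, heterozygous mutant line (1 mutant
# allele out of 2); zygosity and ploidy are illustrative assumptions.

def allelic_frequency(fraction_mutant_cells: float,
                      mutant_alleles_per_cell: int = 1,
                      total_alleles_per_cell: int = 2) -> float:
    """Expected variant allele frequency of the blended cell population."""
    mutant_copies = fraction_mutant_cells * mutant_alleles_per_cell
    total_copies = total_alleles_per_cell  # same ploidy assumed in both lines
    return mutant_copies / total_copies

for fraction in (1.0, 0.10, 0.02):
    print(f"{fraction:>5.0%} mutant cells -> {allelic_frequency(fraction):.1%} AF")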

So it then comes back to this question I asked at the start: what is the impact of assay failure in your laboratory and how do you monitor for it? Obviously we are pushing for the adoption of reference standards and controls; they are an ideal way of doing this. We have been working with partners, particularly proficiency testing and EQA schemes, which have adopted our standards and have actually been able to identify genotyping errors among their participants. Now, while these types of schemes monitor the whole pre-analytical and analytical process, you can then start to dive into the detail. It was difficult to show in the webinar, but if you take some of these peaks here, EGFR G719S, among the participants that looked for this mutation there was an error rate of over 35%, and a large proportion of those errors were false negatives. If you look at the false negatives, the reporting from those samples showed that the labs had over-represented, or over-quantified, the amount of DNA they had actually extracted from the sample. That over-quantification led to less DNA being put into the assay, and less DNA led to a false-negative result. That’s where we really feel we can start to help: to try to identify some of these problems and mitigate them for the future.
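
To make the knock-on effect of over-quantification concrete, here is a minimal sketch; the concentrations and target input are illustrative values, not figures from the EQA data.

# Illustrative only: hypothetical numbers, not from the EQA study.
# If a quantification method over-reads the true DNA concentration, the
# volume calculated to deliver a target input actually delivers less DNA.

target_input_ng = 10.0          # DNA the assay expects per reaction
true_conc_ng_per_ul = 4.0       # what is really in the tube (e.g. fluorometric reading)
reported_conc_ng_per_ul = 20.0  # inflated reading (e.g. spectrophotometric reading)

volume_loaded_ul = target_input_ng / reported_conc_ng_per_ul   # 0.5 µL
actual_input_ng = volume_loaded_ul * true_conc_ng_per_ul       # 2 ng

print(f"Loaded {volume_loaded_ul:.2f} µL, delivering {actual_input_ng:.1f} ng "
      f"instead of {target_input_ng:.1f} ng "
      f"({actual_input_ng / target_input_ng:.0%} of the intended input)")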

Now that’s my introduction over. I’d therefore like to hand over to Dr Vicky Spivey, our guest speaker for this webinar, who will take you through a number of datasets where our reference standards have been used both internally and externally. If there are any questions then please do type away in the little chat question box and we’ll answer them at the end.

VS: Thank you, Jonathan. I’d like to start with an introduction to the main pre-analytical challenges. On the left-hand side you will see the typical workflow for FFPE sample processing, and the first challenge starts right at the beginning of this process, with differences in sample collection and handling. Different labs follow different protocols for sample fixation and FFPE embedding, and both of these can influence the downstream processing of the samples.

The second challenge relates to the efficacy of DNA extraction: often with tissue samples there is only a small amount of material available, and in addition we know that labs use various DNA extraction methods, some of which are automated and some manual, all of which can result in different DNA yields. Finally, the third challenge is the accuracy of DNA quantification, with different methods sometimes yielding quite different results.

Over the next couple of slides I’ll discuss these challenges in more detail and show you some of our external and internal study data.

This slide shows internally generated data, and it demonstrates the variation between five commonly used DNA extraction methods. The graph shows the different extraction methods employed on the X axis, with a sample number of either 6 or 12 for each method. Each FFPE section extracted was the same Horizon FFPE Reference Standard, which Jonathan introduced you to earlier, and they were all quantified using the QuantiFluor fluorometric assay.

The Y axis shows the amount of DNA recovered as a percentage of the total theoretical yield. What you see in this particular dataset is that the Promega Maxwell platform gives the greatest yield from the sections, and it also shows a high degree of reproducibility across all 12 sections. The take-home message is that the same samples extracted on different platforms can give quite different yields.
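
As a rough illustration of what the Y axis represents, percent recovery can be calculated as below; the cell count and yields are hypothetical, and the ~6.6 pg of DNA per diploid human cell is a textbook approximation rather than a figure from the study.

# Illustrative sketch (hypothetical values): express recovered DNA as a
# percentage of the theoretical yield of the section, where the theoretical
# yield is estimated from the assumed number of cells in the section.

cells_per_section = 1_000_000          # assumed cell count in the FFPE section
pg_dna_per_cell = 6.6                  # approximate diploid human genome mass
theoretical_yield_ng = cells_per_section * pg_dna_per_cell / 1000.0  # 6600 ng

recovered_ng = 2100.0                  # measured, e.g. by a fluorometric assay
percent_recovery = 100.0 * recovered_ng / theoretical_yield_ng

print(f"Recovered {recovered_ng:.0f} ng of a theoretical {theoretical_yield_ng:.0f} ng "
      f"({percent_recovery:.0f}% recovery)")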

So, moving onto the next slide, the data presented here was externally generated and highlights the variation, this time, within different FFPE extraction methods. Just to give you a bit of background, thirteen molecular pathology labs were recruited and participated in this study. Between them they extracted a total of 104 FFPE curls using the five different extraction methods shown in the graph. Again, the FFPE curls extracted were all Horizon FFPE Reference Standards, and this time the DNA extracts were quantified using the Qubit, which is another fluorometric assay. The N number in this data refers to the number of labs employing that particular method. As you can see, the results demonstrate that the Qiagen EZ1 automated platform had the lowest yield variance, with a CV of 52%, and the Qiagen QIAamp had the highest CV. So the take-home message from this slide is that different extraction methods have different levels of variability across multiple samples.
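
For reference, the coefficient of variation quoted here is simply the standard deviation of the yields divided by their mean; a minimal sketch with made-up yields:

import statistics

# Minimal sketch of the coefficient of variation (CV) used to compare
# extraction methods. Yields below are invented example values in ng,
# not data from the thirteen-lab study.

yields_ng = [310.0, 180.0, 450.0, 260.0, 520.0, 140.0]

mean_yield = statistics.mean(yields_ng)
sd_yield = statistics.stdev(yields_ng)          # sample standard deviation
cv_percent = 100.0 * sd_yield / mean_yield

print(f"Mean {mean_yield:.0f} ng, SD {sd_yield:.0f} ng, CV {cv_percent:.0f}%")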

So, on the next slide, this dataset was also externally generated, from the same study as the previous slide. The figure shows the average NanoDrop to Qubit ratio in each of the thirteen laboratories that participated, as well as, on the far right-hand side, the average and median ratio of the entire cohort. What you can see is that the correlation between the NanoDrop and Qubit measurements for all the samples was poor, with an R2 of 0.48 and a P value of less than 0.0001. As the graph shows, the median NanoDrop readings were 5.1-fold higher than the Qubit measurements for the same samples. Obviously, if the two methods measured the same, the NanoDrop:Qubit ratio would be 1. What’s important to take home from this slide is that for every participating lab the NanoDrop over-quantified the DNA concentration compared to the Qubit reading.
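
The per-lab ratio reported here is each sample’s NanoDrop reading divided by its Qubit reading for the same extract, summarised as a mean or median; a small sketch with invented paired readings:

import statistics

# Sketch of the NanoDrop:Qubit ratio per sample and the cohort median.
# Paired readings (ng/µL) below are invented for illustration only.

nanodrop = [48.0, 95.0, 22.0, 60.0, 130.0]
qubit    = [10.0, 18.0,  5.0, 12.0,  24.0]

ratios = [n / q for n, q in zip(nanodrop, qubit)]
print("Per-sample ratios:", [round(r, 1) for r in ratios])
print("Median fold over-quantification:", round(statistics.median(ratios), 1))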

Analysing the cause of a failed assay is a particular challenge for labs that aren’t used to handling FFPE samples, or for labs that are using quantification methodologies that tend to overestimate the amount of DNA in a sample when measuring a sample at quite a low concentration, of less than 20 ng/µL.

As a quick comparison of the different methodologies: spectrophotometry is very accurate for samples that are above 10 or 20 ng/µL, and an additional advantage is that it can be used to check for contamination with protein or RNA. In comparison, fluorometry-based methods are suitable for DNA concentrations far below 10 ng/µL but, on the other hand, can be inaccurate for very highly concentrated samples. Fluorometry methods can also be used for both high-molecular-weight and fragmented DNA samples.
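
As a rule-of-thumb sketch of that comparison (the threshold is taken loosely from the discussion above and is not a validated decision procedure), the choice might be expressed like this:

# Rule-of-thumb sketch: which quantification readout to lean on for a given
# expected concentration range. Threshold and wording are illustrative only.

def suggested_method(expected_conc_ng_per_ul: float) -> str:
    """Return a suggested quantification approach for FFPE-derived DNA."""
    if expected_conc_ng_per_ul < 20.0:
        # Spectrophotometry tends to over-read dilute, degraded samples.
        return "fluorometry (dye-based assay)"
    return "spectrophotometry (also reports protein/RNA contamination)"

for conc in (2.0, 15.0, 80.0):
    print(f"{conc:>5.1f} ng/µL -> {suggested_method(conc)}")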

So, in summary: different extraction methods can result in different DNA yields from the same starting material. Secondly, we know that the NanoDrop is very efficient at quantifying DNA at high concentrations, but we’ve also shown that at low concentrations, around 10 ng/µL, spectrophotometry methods can overestimate DNA concentrations compared to fluorometric quantification methods. All of these factors have important implications for the diagnostic test, particularly with regard to false negatives, as in the example Jonathan gave earlier.

JF: So what I want to do now is bring this round to a bit more data, external and internal, on the use of our reference standards. This slide is from the paper that Vicky referenced earlier, and what I want to highlight is the total DNA recovered from the different samples by the participants. Seven of the samples come from Horizon and are validated reference standards, the ones in green, and one sample is a clinical sample. Whilst the lab that distributed the clinical sample did attempt to keep the samples as consistent as possible between vials, it’s very clear that when you try to extract DNA from clinical samples there’s great variability in DNA recovery. That could be attributed to the sample prep; it could be attributed to the DNA quantification. Wherever the variability creeps in, it is, along with other factors, an inherent property of clinical samples, and it suggests they shouldn’t be used as external controls if you truly want to understand the pre-analytical as well as the analytical side of your assay. In contrast, I think this is where validated reference standards can prove very useful as external controls, as this dataset shows much tighter concordance between labs for each sample.

And what we wanted to do, based on this, and this is internal data, was to address a question we’ve always asked: what is the impact of formalin fixation on assay performance? It can impact both the pre-analytical side and the analytical side, and we actually gave a webinar last month, which is available on our website, about how formalin fixation can impact the analytical side. What’s important on the pre-analytical side is shown on the left-hand side of this graph: the three datasets are where we’ve looked at the DNA concentration from three batches of samples where we’ve treated the cells with a mild formalin fixation. We then take the same number of cells through a severe formalin fixation step. What you can see very clearly is that, while the theoretical DNA yield and actual DNA concentration should be consistent between the mild and severe treatments, the NanoDrop, which is the grey bars, substantially overestimated the actual DNA in the tube compared with the Qubit. The knock-on effect, as Vicky mentioned, is that you would then under-load material into your analytical step, so there is a potential for false negatives in your result.

Now, what we’ve tried to develop, to push this further and support the field, is a way of testing both the robustness and sensitivity of your assay or workflow. What I’ve got here is a quadrant, and we can start at the top right. We have DNA reference standards available either as a single SNP, say KRAS G13D, or as a multiplex SNP mix, which is very useful if you’re running next-generation sequencing assays. The DNA in the top right comes directly from the cell line, so it’s very clean and very easy to use, and it will help to tell you the true detection limit of your assay. We then move to the next quadrant, where we’ve developed formalin-compromised reference standards provided as DNA, so within the tube you get DNA, and we currently have early-release products available: one with a mild formalin fixation, which is HD-C750, and one with a severe formalin fixation, HD-C751. With these you can really start to test whether formalin fixation has an impact firstly on the DNA quantification and then also on the analytical side of your workflow, and whether formalin fixation could be contributing to false negatives in your workflow and in your assay. The final quadrant, at the bottom left, is a true process control: here we provide exactly the same material as an FFPE section, so you can go through a DNA extraction, go through the DNA quantification and then put it through the analytical side of your workflow to test how sensitive and robust the whole workflow is. What’s really nice, and it’s inherent in our approach, is that every single one of these tubes has exactly the same genotype; all we’ve changed is the format in which we provide it to you, so there is only one variable that differs between the quadrants, whether it’s clean DNA versus formalin-compromised DNA versus FFPE sections.

I’d be very happy to talk more about these in a one-to-one situation if you’re interested, and we always have live chat up and running on our website, so you can always tap in and chat to our scientists and technical specialists about these particular reference standards. And then it comes back round to the question I asked at the start: what is the impact of assay failure in your laboratory and how do you monitor for it? Over the last four years, as a division within Horizon, we’ve pushed boundaries with quality assurance teams; we’ve worked with a number of EQA schemes and a number of partners that are developing companion diagnostic kits, and what we are driving forward is that validated reference standards are ideal as external controls. The next questions to put out to you are: what extraction and quantification methods are you using? What is the limit of detection of your workflow? And is the impact of formalin treatment an interesting variable for you to look at? If you are asking yourselves any of those questions, I’d like to think we could support you moving forward in your laboratories.