
Evaluation of error and reproducibility of qPCR for absolute quantification of DNA

Absolute quantitative PCR (qPCR) is a method that determines the concentration of DNA in a sample. Accurate and reproducible quantification is required during forensic DNA processing, since the results determine the volume of sample used during STR genotyping. If too little DNA is used, allelic drop-out can occur; if too much DNA is used, the number of artifacts can increase. In either case, sub-optimal DNA input masses can lead to misinterpretation of the evidentiary profile by increasing the probability of drop-in and/or drop-out.
Generally, the qPCR method used during forensic DNA processing employs a set of standards that are run alongside the questioned samples and used to generate a standard curve. From these data a linear equation is established and then used to estimate the DNA concentration of the unknown sample. However, standard curves have been shown to be prone to systematic and random error effects that impact the accuracy of the concentration estimate.
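For context, the standard-curve calculation described above reduces to an ordinary least-squares fit of quantification cycle (Cq) against log10 of standard concentration, which is then inverted to estimate an unknown sample's concentration. The following is a minimal sketch of that calculation; the function names and the dilution/Cq values are hypothetical and not taken from this study.

```python
import numpy as np

def fit_standard_curve(std_concs, std_cqs):
    """Fit Cq = slope * log10(conc) + intercept by least squares.

    Amplification efficiency can be derived from the slope as
    E = 10**(-1/slope) - 1 (E = 1.0 corresponds to perfect doubling).
    """
    slope, intercept = np.polyfit(np.log10(std_concs), std_cqs, deg=1)
    return slope, intercept

def estimate_concentration(cq, slope, intercept):
    """Invert the fitted line: conc = 10**((Cq - intercept) / slope)."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical serial-dilution standards (ng/uL) and their observed Cq values.
std_concs = [50.0, 5.0, 0.5, 0.05, 0.005]
std_cqs = [22.1, 25.5, 28.9, 32.3, 35.7]

slope, intercept = fit_standard_curve(std_concs, std_cqs)
print(estimate_concentration(27.0, slope, intercept))  # unknown with Cq = 27.0
```

Because the slope and intercept are re-estimated from a small number of standards each run, any dilution or pipetting error in those standards propagates into every concentration estimated from that curve, which is the error source the abstract refers to.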
This study examines two alternative methods for determining the DNA concentration of unknown samples and compares them to the currently accepted protocol of running new dilutions/standards with every assay. The two alternative methods are: 1) using a validated standard curve, and 2) using linear regression of efficiency.
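For orientation, the core of the linear-regression-of-efficiency (LRE) approach is that per-cycle amplification efficiency falls roughly linearly with fluorescence as the reaction approaches plateau, so a regression of efficiency on fluorescence yields the maximal efficiency without external standards. The sketch below shows only that step, assuming baseline-corrected fluorescence readings; the full method also derives the target quantity from the fitted parameters, which is omitted here, and the function name is hypothetical.

```python
import numpy as np

def lre_max_efficiency(fluorescence):
    """Estimate maximal amplification efficiency via linear regression
    of per-cycle efficiency against fluorescence.

    Per-cycle efficiency E_C = F_C / F_{C-1} - 1 declines approximately
    linearly with fluorescence F_C as the reaction saturates;
    extrapolating the fitted line back to F = 0 gives E_max.
    """
    f = np.asarray(fluorescence, dtype=float)
    eff = f[1:] / f[:-1] - 1.0               # per-cycle efficiency
    slope, intercept = np.polyfit(f[1:], eff, deg=1)
    return intercept                          # E_max (efficiency at F = 0)
```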
To examine the feasibility of using these two methods for forensic purposes, two samples were quantified by qPCR in quadruplicate over the course of three years, and concentrations were calculated using all three methods. The effects of time, kit lot, and instrument calibration on the calculated concentrations were examined for both total human and Y-DNA. Specifically, the methods were compared by examining variances in concentration over the three-year period and contrasting these results with the variances obtained within runs. The method that resulted in the smallest changes in concentration over time was regarded as the most stable.
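As a rough illustration of this within-run versus between-run comparison (not the study's actual analysis), the two variances of replicate concentration estimates could be computed as below; the data values are invented.

```python
import numpy as np

# Hypothetical concentration estimates (ng/uL): one row per run over the
# study period, one column per replicate within a run (quadruplicate).
concs = np.array([
    [1.02, 0.98, 1.05, 1.01],  # run 1
    [0.95, 0.93, 0.97, 0.96],  # run 2
    [1.10, 1.12, 1.08, 1.11],  # run 3
])

# Within-run variance: spread of replicates inside each run, averaged.
within_run_var = concs.var(axis=1, ddof=1).mean()

# Between-run variance: spread of the run means across the study period.
between_run_var = concs.mean(axis=1).var(ddof=1)

print(f"within-run variance:  {within_run_var:.5f}")
print(f"between-run variance: {between_run_var:.5f}")
```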
Results show that, of the three methods, use of a validated curve resulted in the least variation in DNA concentration across multiple runs. Further, the factor with the largest impact on concentration variance was the calibration of the instrument. Based on these results, recommendations are provided.

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/13309
Date: 24 September 2015
Creators: Cicero, Michael Carmen
Source Sets: Boston University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
