Publications by authors named "Stephen L R Ellison"

13 Publications


Assessment of measurement precision in single-voxel spectroscopy at 7 T: Toward minimal detectable changes of metabolite concentrations in the human brain in vivo.

Magn Reson Med 2022 03 16;87(3):1119-1135. Epub 2021 Nov 16.

Physikalisch-Technische Bundesanstalt, Braunschweig und Berlin, Germany.

Purpose: To introduce a study design and statistical analysis framework to assess the repeatability, reproducibility, and minimal detectable changes (MDCs) of metabolite concentrations determined by in vivo MRS.

Methods: An unbalanced nested study design was chosen to acquire in vivo MRS data under different repeatability and reproducibility scenarios. A spin-echo, full-intensity acquired localized (SPECIAL) sequence was employed at 7 T utilizing three different inversion pulses: a hyperbolic secant (HS), a gradient offset independent adiabaticity (GOIA), and a wideband, uniform rate, smooth truncation (WURST) pulse. Metabolite concentrations, Cramér-Rao lower bounds (CRLBs), and coefficients of variation (CVs) were calculated. Both Bland-Altman analysis and a restricted maximum-likelihood estimation (REML) analysis were performed to estimate the different variance contributions to the repeatability and reproducibility of the measured concentrations. A Bland-Altman analysis of the spectral shape was also performed to assess its variance independently of quantification-model influences.

Results: For this setup, minimal detectable changes of brain metabolite concentrations were found to be between 0.40 µmol/g and 2.23 µmol/g. CRLBs accounted for only 16% to 74% of the total variance of the metabolite concentrations. The application of gradient-modulated inversion pulses in SPECIAL led to slightly improved repeatability, but overall reproducibility appeared to be limited by differences in positioning, calibration, and other day-to-day variations across sessions.

Conclusion: A framework is introduced to estimate the precision of metabolite concentrations obtained by MRS in vivo, and the minimal detectable changes for 13 metabolite concentrations measured at 7 T using SPECIAL are obtained.
http://dx.doi.org/10.1002/mrm.29034
March 2022
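
As an illustration of the minimal-detectable-change concept used above, a common formulation (an assumption here, not code from the paper) takes MDC ≈ 1.96·√2·SD for a 95% confidence level, where SD is the repeatability standard deviation:

```python
import math

def minimal_detectable_change(sd_repeatability: float, z: float = 1.96) -> float:
    """MDC for the difference of two measurements: z * sqrt(2) * within-subject SD."""
    return z * math.sqrt(2) * sd_repeatability

# A hypothetical repeatability SD of 0.5 umol/g gives an MDC of ~1.39 umol/g.
mdc = minimal_detectable_change(0.5)
```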

Extending digital PCR analysis by modelling quantification cycle data.

BMC Bioinformatics 2016 Oct 12;17(1):421. Epub 2016 Oct 12.

LGC, Queens Road, Teddington, Middlesex, TW11 0LY, UK.

Background: Digital PCR (dPCR) is a technique for estimating the concentration of a target nucleic acid by loading a sample into a large number of partitions, amplifying the target and using a fluorescent marker to identify which partitions contain the target. The standard analysis uses only the proportion of partitions containing target to estimate the concentration and depends on the assumption that the initial distribution of molecules in partitions is Poisson. In this paper we describe a way to extend this analysis using the quantification cycle (Cq) data that may also be available, but rather than assuming the Poisson distribution the more general Conway-Maxwell-Poisson distribution is used instead.

Results: A software package for the open source language R has been created for performing the analysis. This was used to validate the method by analysing Cq data from dPCR experiments involving three types of DNA (attenuated, virulent and plasmid) at three concentrations. Results indicate some deviation from the Poisson distribution, which is strongest for the virulent DNA sample. Theoretical calculations indicate that the deviation from the Poisson distribution results in a bias of around 5% for the analysed data if the standard analysis is used, but that it could be larger at higher concentrations. Compared to the estimates of subsequent-cycle efficiency, the estimates of first-cycle efficiency are much lower for the virulent DNA, moderately lower for the attenuated DNA, and close for the plasmid DNA. Further method validation using simulated data gave results closer to the true values, and with lower standard deviations, than the standard method for concentrations up to approximately 2.5 copies/partition.

Conclusions: The Cq-based method is effective at estimating DNA concentration and is not seriously affected by data issues such as outliers and moderately non-linear trends. The data analysis suggests that the Poisson assumption of the standard approach does lead to a bias that is fairly small, though more research is needed. Estimates of first-cycle efficiency being lower than estimates of subsequent-cycle efficiency may indicate samples that are mixtures of single-stranded and double-stranded DNA. The model can reduce or eliminate the resulting bias.
http://dx.doi.org/10.1186/s12859-016-1275-3
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5062887
October 2016
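
The standard dPCR analysis described in the abstract can be sketched in a few lines (a simplified illustration, not the R package the authors describe): with a fraction p of partitions positive and a Poisson distribution of molecules across partitions, the estimated mean copies per partition is λ = -ln(1 - p).

```python
import math

def dpcr_copies_per_partition(positive: int, total: int) -> float:
    """Standard Poisson estimate: lambda = -ln(1 - p), p = fraction of positive partitions."""
    p = positive / total
    if p >= 1.0:
        raise ValueError("All partitions positive: concentration not estimable")
    return -math.log(1.0 - p)

# 10,000 partitions with 3,935 positive -> roughly 0.5 copies/partition
lam = dpcr_copies_per_partition(3935, 10000)
```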

An international comparability study on quantification of mRNA gene expression ratios: CCQM-P103.1.

Biomol Detect Quantif 2016 Jun 6;8:15-28. Epub 2016 Jun 6.

National Institute of Metrology (NIM), Beijing, PR China.

Measurement of RNA can be used to study and monitor a range of infectious and non-communicable diseases, with profiling of multiple gene expression mRNA transcripts being increasingly applied to cancer stratification and prognosis. An international comparison study (Consultative Committee for Amount of Substance (CCQM)-P103.1) was performed in order to evaluate the comparability of measurements of RNA copy number ratio for multiple gene targets between two samples. Six exogenous synthetic targets comprising External RNA Control Consortium (ERCC) standards were measured alongside transcripts for three endogenous gene targets present in the background of human cell line RNA. The study was carried out under the auspices of the Nucleic Acids (formerly Bioanalysis) Working Group of the CCQM. It was coordinated by LGC (United Kingdom) with the support of the National Institute of Standards and Technology (USA), and results were submitted from thirteen National Metrology Institutes and Designated Institutes. The majority of laboratories performed RNA measurements using RT-qPCR, with datasets also being submitted by two laboratories based on reverse transcription digital polymerase chain reaction and one laboratory using a next-generation sequencing method. In RT-qPCR analysis, the RNA copy number ratios between the two samples were quantified using either a standard curve or a relative quantification approach. In general, good agreement was observed between the reported results of ERCC RNA copy number ratio measurements. Measurements of the RNA copy number ratios for endogenous genes between the two samples were also consistent between the majority of laboratories. Some differences in the reported values and confidence intervals ('measurement uncertainties') were noted, which may be attributable to the choice of measurement method or quantification approach. This highlights the need for standardised practices for the calculation of fold change ratios and uncertainties in the area of gene expression profiling.
http://dx.doi.org/10.1016/j.bdq.2016.05.003
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4906133
June 2016
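
The relative quantification approach mentioned in the abstract is commonly implemented as the 2^-ΔΔCq method; the sketch below assumes ~100% PCR efficiency and uses hypothetical Cq values, not data from the study.

```python
def fold_change_ddcq(cq_target_s1, cq_ref_s1, cq_target_s2, cq_ref_s2):
    """Relative quantification by the 2^-ddCq method (assumes ~100% PCR efficiency)."""
    dcq1 = cq_target_s1 - cq_ref_s1   # target normalised to reference, sample 1
    dcq2 = cq_target_s2 - cq_ref_s2   # target normalised to reference, sample 2
    return 2.0 ** -(dcq1 - dcq2)

# Target amplifies one cycle earlier (relative to the reference) in sample 1 -> ~2-fold ratio
ratio = fold_change_ddcq(24.0, 20.0, 25.0, 20.0)
```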

Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

Talanta 2014 Dec 19;130:462-9. Epub 2014 Jul 19.

Laboratory of Government Chemist Ltd (LGC), Queens Road, Teddington TW11 0LY, Middlesex, UK.

Monte Carlo simulation of expert judgments on human errors in chemical analysis was used to determine distributions of the error quantification scores (scores of likelihood and severity, and scores of the effectiveness of a laboratory quality system in preventing the errors). The simulation was based on modeling expert behavior: confident, reasonably doubting, and irresolute expert judgments were represented by different probability mass functions (pmfs). As a case study, 36 scenarios of human error which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for the three pmfs of expert behavior were compared. Variability of the scores, expressed as the standard deviation of the simulated score values from the distribution mean, was used to assess score robustness. A range of score values, calculated directly from elicited data and simulated by the Monte Carlo method for the different pmfs, was also discussed from the robustness point of view. It was shown that the robustness of the scores obtained in the case study can be considered satisfactory for quality risk management and improvement of a laboratory quality system against human errors.
http://dx.doi.org/10.1016/j.talanta.2014.07.036
December 2014
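
A minimal sketch of the kind of simulation described above, using an entirely hypothetical pmf for a "reasonably doubting" expert on a 1-5 scoring scale (the paper's actual pmfs and scales are not reproduced here):

```python
import random
import statistics

# Hypothetical pmf: most mass on the elicited score (3), some doubt spread to neighbours.
PMF = {2: 0.2, 3: 0.6, 4: 0.2}

def simulate_scores(pmf, n, seed=1):
    """Draw n score values from the expert-behavior pmf."""
    rng = random.Random(seed)
    values, weights = zip(*pmf.items())
    return rng.choices(values, weights=weights, k=n)

scores = simulate_scores(PMF, 10000)
mean = statistics.mean(scores)   # close to 3.0 by symmetry of the pmf
sd = statistics.stdev(scores)    # score variability, used to assess robustness
```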

The interlaboratory performance of microbiological methods for food analysis.

J AOAC Int 2012 Sep-Oct;95(5):1433-9

LGC Ltd, Queens Rd, Teddington, Middlesex, TW11 0LY, United Kingdom.

Repeatability and reproducibility data for microbiological methods in food analysis were collated and assessed with a view to identifying useful or important trends. Generalized additive models for location, scale, and shape (GAMLSS) were used to model the distribution of variances. It was found that mean reproducibility SD for log10(CFU) data is largely independent of concentration, while repeatability SD of log10(CFU) data decreases strongly and significantly with increasing enumeration. The model for reproducibility SD gave a mean of 0.44, with an upper 95th percentile of approximately 0.76. Repeatability variance could be described reasonably well by a simple dichotomous model: at enumerations below 10⁵/g, the model for repeatability SD gave a mean of approximately 0.35 and an upper 95th percentile of 0.63; above 10⁵/g, it gave a mean of 0.2 and an upper 95th percentile of 0.36. A Horwitz-like function showed no appreciable advantage in describing the data set and gave an apparently worse fit. The relationship between repeatability and reproducibility of log10(CFU) is not constant across the concentration range studied. Both repeatability and reproducibility were found to depend on matrix class and organism.
http://dx.doi.org/10.5740/jaoacint.11-452
December 2012
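
The dichotomous repeatability model quoted in the abstract can be written as a simple step function, using the mean values given there:

```python
def repeatability_sd_log10cfu(enumeration_per_g: float) -> float:
    """Dichotomous model for repeatability SD of log10(CFU), using the mean
    values quoted in the abstract: ~0.35 below 10^5/g, ~0.2 above."""
    return 0.35 if enumeration_per_g < 1e5 else 0.2

# A low enumeration falls in the high-variability regime
sd_low = repeatability_sd_log10cfu(1e3)
```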

A standard additions method reduces inhibitor-induced bias in quantitative real-time PCR.

Anal Bioanal Chem 2011 Dec 15;401(10):3221-7. Epub 2011 Oct 15.

LGC Limited, Queens Road, Teddington, UK.

A method of calibration for real-time quantitative polymerase chain reaction (qPCR) experiments based on the method of standard additions combined with non-linear curve fitting is described. The method is tested by comparing the results of a traditionally calibrated qPCR experiment with a standard additions experiment in the presence of 2 mM EDTA, a known inhibitor chosen to provide an unambiguous test of the principle by inducing an approximately twofold bias in apparent copy number calculated using traditional calibration. The standard additions method is shown to substantially reduce inhibitor-induced bias in real-time qPCR.
http://dx.doi.org/10.1007/s00216-011-5460-y
December 2011

Standard additions: myth and reality.

Analyst 2008 Aug 29;133(8):992-7. Epub 2008 Apr 29.

LGC Ltd, Queens Road, Teddington, Middlesex, UK TW11 0LY.

Standard additions is a calibration technique devised to eliminate rotational matrix effects in analytical measurement. Although the technique is presented in almost every textbook of analytical chemistry, its behaviour in practice is not well documented, and misleading accounts are common. The most important limitation is that the method cannot deal with translational matrix effects, which need to be handled separately. In addition, because the method involves extrapolation from known data, it is often regarded as less precise than external calibration (interpolation) techniques. Here, using a generalised model of an analytical system, we examine the behaviour of the method of standard additions under a range of conditions and find that, if executed optimally, there is no noteworthy loss of precision.
http://dx.doi.org/10.1039/b717660k
August 2008
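
The extrapolation at the heart of standard additions can be sketched with ordinary least squares: fit the response against added concentration, then take intercept/slope as the estimate of the original concentration (illustrative, noise-free data):

```python
def standard_additions_estimate(added, response):
    """Ordinary least-squares fit of response vs. added concentration;
    the original concentration is the x-axis extrapolation intercept/slope."""
    n = len(added)
    mx = sum(added) / n
    my = sum(response) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, response))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope

# Noise-free example: true concentration 2.0, sensitivity 3.0 -> estimate 2.0
est = standard_additions_estimate([0, 1, 2, 3], [6.0, 9.0, 12.0, 15.0])
```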

Treatment of uncorrected measurement bias in uncertainty estimation for chemical measurements.

Anal Bioanal Chem 2008 Jan 17;390(1):201-13. Epub 2007 Nov 17.

SP Technical Research Institute of Sweden, Box 857, 501 15, Borås, Sweden.

Consistent treatment of measurement bias, including the question of whether or not to correct for bias, is essential for the comparability of measurement results. The case for correcting for bias is discussed, and it is shown that instances in which bias is known or suspected, but in which a specific correction cannot be justified, are comparatively common. The ISO Guide to the Expression of Uncertainty in Measurement does not adequately address this situation. It is concluded that there is a need for guidance on handling cases of uncorrected bias. Several different published approaches to the treatment of uncorrected bias and its uncertainty are critically reviewed with regard to coverage probability and simplicity of execution. On the basis of current studies, and taking into account testing laboratory needs for a simple and consistent approach with a symmetric uncertainty interval, we conclude that for most cases with large degrees of freedom, linear addition of a bias term adjusted for exact coverage ("U(e)") as described by Synek is to be preferred. This approach does, however, become more complex if degrees of freedom are low. For modest bias and low degrees of freedom, summation of bias, bias uncertainty and observed value uncertainty in quadrature ("RSSu") provides a similar interval and is simpler to adapt to reduced degrees of freedom, at the cost of a more restricted range of application if accurate coverage is desired.
http://dx.doi.org/10.1007/s00216-007-1693-1
January 2008
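
The "RSSu" treatment summarised above (quadrature summation of bias, bias uncertainty, and observed-value uncertainty) can be sketched as follows; the placement of the coverage factor is an assumption here, for the large-degrees-of-freedom case with k ≈ 2:

```python
import math

def rssu_expanded_uncertainty(u_obs, bias, u_bias, k=2.0):
    """'RSSu'-style treatment of uncorrected bias: sum the observed-value
    uncertainty, the bias estimate and the bias uncertainty in quadrature,
    then expand by the coverage factor k (~2 for ~95% coverage, large d.o.f.)."""
    return k * math.sqrt(u_obs**2 + bias**2 + u_bias**2)

# Hypothetical values: u_obs = 0.3, bias = 0.4, negligible bias uncertainty
U = rssu_expanded_uncertainty(u_obs=0.3, bias=0.4, u_bias=0.0)
```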

Routes to improving the reliability of low level DNA analysis using real-time PCR.

BMC Biotechnol 2006 Jul 6;6:33. Epub 2006 Jul 6.

Analytical Technology, LGC Limited, Teddington, TW11 0LY, UK.

Background: Accurate quantification of DNA using quantitative real-time PCR at low levels is increasingly important for clinical, environmental and forensic applications. At low concentrations (here, fewer than 100 target copies), DNA quantification is sensitive to losses during preparation and suffers from appreciable valid non-detection rates for sampling reasons. This paper reports studies on a real-time quantitative PCR assay targeting a region of the human SRY gene over a concentration range of 0.5 to 1000 target copies. The effects of different sample preparation and calibration methods on quantitative accuracy were investigated.

Results: At very low target concentrations of 0.5-10 genome equivalents (g.e.), eliminating any replicates within each DNA standard concentration with no measurable signal (non-detects) compromised calibration. Improved calibration could be achieved by eliminating all calibration replicates for any standard concentration with non-detects ('elimination by sample'). Test samples also showed positive bias if non-detects were removed prior to averaging; less biased results were obtained by converting to concentration, including non-detects as zero concentration, and averaging all values. Tube plastic proved to have a strongly significant effect on DNA quantitation at low levels (p = 1.8 × 10⁻⁴). At low concentrations (under 10 g.e.), results for assays prepared in standard plastic were reduced by about 50% compared to low-retention plastic. Preparation solution (carrier DNA or stabiliser) was not found to have a significant effect in this study. Detection probabilities were calculated using logistic regression. Logistic regression over large concentration ranges proved sensitive to non-detected replicate reactions due to amplification failure at high concentrations; the effect could be reduced by regression against log(concentration) or, better, by eliminating invalid responses.

Conclusion: Use of low-retention plastic tubes is advised for quantification of DNA solutions at levels below 100 g.e. For low-level calibration using linear least squares, it is better to eliminate the entire replicate group for any standard that shows non-detects reasonably attributable to sampling effects than to either eliminate non-detects or to assign arbitrary high Ct values. In calculating concentrations for low-level test samples with non-detects, concentrations should be calculated for each replicate, zero concentration assigned to non-detects, and all resulting concentration values averaged. Logistic regression is a useful method of estimating detection probability at low DNA concentrations.
http://dx.doi.org/10.1186/1472-6750-6-33
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1559608
July 2006
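
The recommended treatment of non-detects in test samples (convert each replicate to a concentration, assign zero to non-detects, then average all values) can be sketched with hypothetical standard-curve parameters; the slope and intercept below are illustrative, not from the study:

```python
# Hypothetical standard curve: Ct = intercept + slope * log10(concentration)
SLOPE, INTERCEPT = -3.32, 38.0

def ct_to_concentration(ct):
    """Invert the standard curve to get a concentration from a Ct value."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def mean_concentration(cts):
    """Replicate Ct values; None marks a non-detect, counted as zero concentration."""
    concs = [0.0 if ct is None else ct_to_concentration(ct) for ct in cts]
    return sum(concs) / len(concs)

# Two detected replicates at Ct 38 (~1 copy on this curve) plus one non-detect
est = mean_concentration([38.0, 38.0, None])
```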

Reporting measurement uncertainty and coverage intervals near natural limits.

Analyst 2006 Jun 11;131(6):710-7. Epub 2006 May 11.

LGC Limited, Queens Road, Teddington, Middlesex, UK TW11 0LY.

Different methods of treating data which lie close to a natural limit in a feasible range, such as zero or 100% mass or mole fraction, are discussed and recommendations made concerning the most appropriate. The methods considered include discarding observations beyond the limit, shifting observations to the limit, truncation of a classical confidence interval based on Student's t (coupled with shifting the result to the limit if outside the feasible range), truncation and renormalisation of an assumed normal distribution, and the maximum density interval of a Bayesian estimate based on a normal measurement distribution and a uniform prior within the feasible range. Based on consideration of bias and simulation to assess coverage, it is recommended that for most purposes, a confidence interval near a natural limit should be constructed by first calculating the usual confidence interval based on Student's t, then truncating the out-of-range portion to leave an asymmetric interval and adjusting the reported value to within the resulting interval if required. It is suggested that the original standard uncertainty is retained for uncertainty propagation purposes.
http://dx.doi.org/10.1039/b518084h
June 2006
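
The recommended construction, truncating the classical interval at the natural limit and shifting the reported value into the resulting interval if required, can be sketched as:

```python
def truncated_interval(mean, half_width, lower=0.0, upper=None):
    """Classical confidence interval truncated at natural limits; the reported
    value is shifted into the resulting (possibly asymmetric) interval if it
    falls outside. half_width is the usual t-based half-interval."""
    lo, hi = mean - half_width, mean + half_width
    lo = max(lo, lower)
    if upper is not None:
        hi = min(hi, upper)
    value = min(max(mean, lo), hi)
    return value, lo, hi

# Result -0.2 with half-width 0.5 near the natural limit 0:
# the interval becomes [0, 0.3] and the reported value moves to 0.
value, lo, hi = truncated_interval(-0.2, 0.5, lower=0.0)
```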

Scoring in genetically modified organism proficiency tests based on log-transformed results.

J AOAC Int 2006 Jan-Feb;89(1):232-9

University of London, Birkbeck College, School of Biological and Chemical Sciences, Malet St, London, United Kingdom.

The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
May 2006
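
Scoring on log-transformed results can be sketched as follows, with a hypothetical assigned value and a hypothetical standard deviation for proficiency assessment on the log scale (neither is taken from the schemes studied):

```python
import math

def log_z_scores(results, assigned_value, sigma_log):
    """z-scores on log-transformed results: z = (ln x - ln x_a) / sigma_log,
    suited to the positively skewed distributions seen in GMO PT rounds."""
    log_assigned = math.log(assigned_value)
    return [(math.log(x) - log_assigned) / sigma_log for x in results]

# Hypothetical round: assigned value 1.0% GM content, sigma 0.2 on the log scale
zs = log_z_scores([0.8, 1.0, 1.5], 1.0, 0.2)
```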

A decision theory approach to fitness for purpose in analytical measurement.

Analyst 2002 Jun;127(6):818-24

Department of Statistical Science, University College London, UK.

The choice of an analytical procedure and the determination of an appropriate sampling strategy are here treated as a decision theory problem in which sampling and analytical costs are balanced against possible end-user losses due to measurement error. Measurement error is taken here to include both sampling and analytical variances, but systematic errors are not considered. The theory is developed in detail for the case exemplified by a simple accept or reject decision following an analytical measurement on a batch of material, and useful approximate formulae are given for this case. Two worked examples are given, one involving a batch production process and the other a land reclamation site.
http://dx.doi.org/10.1039/b111465d
June 2002
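
The cost-balancing idea can be illustrated with a toy model (the loss function and all parameters below are hypothetical, not those of the paper): measurement cost grows linearly with the number of replicates n, while expected end-user loss is taken proportional to the measurement variance σ²/n, so an optimal n exists.

```python
def expected_total_cost(n, cost_per_measurement, sigma, loss_coefficient):
    """Toy fitness-for-purpose model: measurement cost n*c plus an end-user
    loss taken proportional to the measurement variance sigma^2 / n."""
    return n * cost_per_measurement + loss_coefficient * sigma**2 / n

# Choose the replicate count minimising total expected cost (hypothetical parameters)
best_n = min(range(1, 21),
             key=lambda n: expected_total_cost(n, 10.0, 1.0, 1000.0))
```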