**13 Publications**


Magn Reson Med 2022 Mar 16;87(3):1119-1135. Epub 2021 Nov 16.

Physikalisch-Technische Bundesanstalt, Braunschweig und Berlin, Germany.

DOI: http://dx.doi.org/10.1002/mrm.29034

March 2022

Accredit Qual Assur 2018;23(6).

European Commission Joint Research Centre (JRC), Geel, Belgium.

DOI: http://dx.doi.org/10.1007/s00769-018-1349-1
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8201643

January 2018

BMC Bioinformatics 2016 Oct 12;17(1):421. Epub 2016 Oct 12.

LGC, Queens Road, Teddington, Middlesex, TW11 0LY, UK.

DOI: http://dx.doi.org/10.1186/s12859-016-1275-3
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5062887

October 2016

Biomol Detect Quantif 2016 Jun 6;8:15-28. Epub 2016 Jun 6.

National Institute of Metrology (NIM), Beijing, PR China.

Measurement of RNA can be used to study and monitor a range of infectious and non-communicable diseases, with profiling of multiple mRNA transcripts being increasingly applied to cancer stratification and prognosis. An international comparison study (Consultative Committee for Amount of Substance (CCQM)-P103.1) was performed to evaluate the comparability of measurements of RNA copy number ratio for multiple gene targets between two samples. Six exogenous synthetic targets comprising External RNA Control Consortium (ERCC) standards were measured alongside transcripts for three endogenous gene targets present in the background of human cell line RNA. The study was carried out under the auspices of the Nucleic Acids (formerly Bioanalysis) Working Group of the CCQM. It was coordinated by LGC (United Kingdom) with the support of the National Institute of Standards and Technology (USA), and results were submitted by thirteen National Metrology Institutes and Designated Institutes. The majority of laboratories performed RNA measurements using RT-qPCR, with datasets also being submitted by two laboratories using reverse transcription digital polymerase chain reaction and one laboratory using a next-generation sequencing method. In the RT-qPCR analyses, the RNA copy number ratios between the two samples were quantified using either a standard curve or a relative quantification approach. In general, good agreement was observed between the reported results of ERCC RNA copy number ratio measurements. Measurements of the RNA copy number ratios for endogenous genes between the two samples were also consistent between the majority of laboratories. Some differences in the reported values and confidence intervals ('measurement uncertainties') were noted, which may be attributable to the choice of measurement method or quantification approach. This highlights the need for standardised practices for the calculation of fold change ratios and uncertainties in the area of gene expression profiling.
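The relative quantification approach mentioned above is commonly implemented as the 2^-ΔΔCq calculation. A minimal sketch, assuming equal and known amplification efficiency for target and reference genes (the Cq values below are hypothetical):

```python
def fold_change_ddcq(cq_target_a, cq_ref_a, cq_target_b, cq_ref_b, efficiency=2.0):
    """RNA copy number ratio between samples A and B by the
    relative quantification (ddCq) method, assuming the same
    amplification efficiency for target and reference genes."""
    dcq_a = cq_target_a - cq_ref_a   # normalise target to reference gene, sample A
    dcq_b = cq_target_b - cq_ref_b   # normalise target to reference gene, sample B
    ddcq = dcq_a - dcq_b
    return efficiency ** (-ddcq)

# The target crossing one cycle earlier in sample A (same reference Cq)
# corresponds to a twofold higher copy number:
ratio = fold_change_ddcq(24.0, 20.0, 25.0, 20.0)
```

Differences in assumed efficiency (or the use of a standard curve instead) are one route by which the reported ratios and uncertainties can diverge between laboratories.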

DOI: http://dx.doi.org/10.1016/j.bdq.2016.05.003
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4906133

June 2016

Talanta 2014 Dec 19;130:462-469. Epub 2014 Jul 19.

Laboratory of Government Chemist Ltd (LGC), Queens Road, Teddington TW11 0LY, Middlesex, UK.

Monte Carlo simulation of expert judgments on human errors in chemical analysis was used to determine distributions of the error quantification scores (scores of likelihood and severity, and scores of the effectiveness of a laboratory quality system in preventing the errors). The simulation was based on modeling expert behavior: confident, reasonably doubting and irresolute expert judgments were represented by different probability mass functions (pmfs). As a case study, 36 scenarios of human error that may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for the three pmfs of expert behavior were compared. Variability of the scores, expressed as the standard deviation of the simulated score values from the distribution mean, was used to assess score robustness. The range of score values, calculated directly from elicited data and simulated by the Monte Carlo method for the different pmfs, was also discussed from the robustness point of view. It was shown that the robustness of the scores obtained in the case study can be assessed as satisfactory for quality risk management and for improvement of a laboratory quality system against human errors.
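The simulation idea can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the 1-5 scale, the pmf values and the behaviour labels are assumptions chosen to mirror the confident/doubting/irresolute distinction described in the abstract.

```python
import random
import statistics

# Hypothetical pmfs over offsets from the elicited score: a confident
# expert keeps all mass on the elicited value, a doubting expert
# spreads some mass to the neighbouring scores, an irresolute expert
# spreads more.
PMFS = {
    "confident":  {0: 1.0},
    "doubting":   {-1: 0.15, 0: 0.70, 1: 0.15},
    "irresolute": {-1: 0.30, 0: 0.40, 1: 0.30},
}

def simulate_scores(elicited, behaviour, n=10_000, lo=1, hi=5, seed=1):
    """Monte Carlo draws of a score around the elicited value,
    perturbed according to the expert-behaviour pmf and clipped
    to the scale limits."""
    rng = random.Random(seed)
    offsets, probs = zip(*PMFS[behaviour].items())
    draws = rng.choices(offsets, weights=probs, k=n)
    return [min(max(elicited + d, lo), hi) for d in draws]

scores = simulate_scores(3, "doubting")
mean = statistics.mean(scores)
sd = statistics.pstdev(scores)  # variability used to assess robustness
```

The standard deviation of the simulated scores is then compared across the three behaviour pmfs to judge whether the elicited scores are robust to how decisively the expert answered.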

DOI: http://dx.doi.org/10.1016/j.talanta.2014.07.036

December 2014

J AOAC Int 2012 Sep-Oct;95(5):1433-9

LGC Ltd, Queens Rd, Teddington, Middlesex, TW11 0LY, United Kingdom.

Repeatability and reproducibility data for microbiological methods in food analysis were collated and assessed with a view to identifying useful or important trends. Generalized additive modeling for location, scale, and shape was used to model the distribution of variances. Mean reproducibility for log10(CFU) data was found to be largely independent of concentration, while the repeatability SD of log10(CFU) data decreases strongly and significantly with increasing enumeration. The model for reproducibility SD gave a mean of 0.44, with an upper 95th percentile of approximately 0.76. Repeatability variance could be described reasonably well by a simple dichotomous model: at enumerations below 10^5/g, the model for repeatability SD gave a mean of approximately 0.35 and an upper 95th percentile of 0.63; above 10^5/g, it gave a mean of 0.2 and an upper 95th percentile of 0.36. A Horwitz-like function showed no appreciable advantage in describing the data set and gave an apparently worse fit. The relationship between repeatability and reproducibility of log10(CFU) is therefore not constant across the concentration range studied. Both repeatability and reproducibility were found to depend on matrix class and organism.
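The dichotomous model can be written down directly from the mean values reported in the abstract. A minimal sketch; the 2.8 factor in the repeatability-limit helper is the standard ISO 5725 convention (2 sqrt(2) times the repeatability SD) and is an addition, not from the abstract:

```python
def repeatability_sd_log10cfu(enumeration_per_g):
    """Dichotomous model for the repeatability SD of log10(CFU),
    using the mean values reported in the abstract: ~0.35 below
    10^5/g, ~0.20 above."""
    return 0.35 if enumeration_per_g < 1e5 else 0.20

# Mean reproducibility SD of log10(CFU), largely independent of level.
REPRODUCIBILITY_SD = 0.44

def repeatability_limit(sd):
    """95 % repeatability limit r = 2.8 * s_r (ISO 5725 convention):
    the largest difference expected between duplicate results."""
    return 2.8 * sd

r_low = repeatability_limit(repeatability_sd_log10cfu(1e4))   # below 10^5/g
r_high = repeatability_limit(repeatability_sd_log10cfu(1e6))  # above 10^5/g
```

So duplicate enumerations at low levels may legitimately differ by about one log10 unit, while at high levels the expected spread is roughly half that.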

DOI: http://dx.doi.org/10.5740/jaoacint.11-452

December 2012

Anal Bioanal Chem 2011 Dec 15;401(10):3221-7. Epub 2011 Oct 15.

LGC Limited, Queens Road, Teddington, UK.

A method of calibration for real-time quantitative polymerase chain reaction (qPCR) experiments, based on the method of standard additions combined with non-linear curve fitting, is described. The method is tested by comparing the results of a traditionally calibrated qPCR experiment with a standard additions experiment in the presence of 2 mM EDTA, a known inhibitor chosen to provide an unambiguous test of the principle by inducing an approximately twofold bias in the apparent copy number calculated using traditional calibration. The standard additions method is shown to substantially reduce inhibitor-induced bias in real-time qPCR.
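The combination of standard additions with non-linear fitting can be illustrated as follows. This is a sketch under assumed conditions (noise-free simulated Cq values, a hypothetical intercept of 38 and slope of 3.32 per log10), not the paper's procedure: known copy-number spikes are added to aliquots of the sample, and the unknown starting copy number x0 is recovered as a fit parameter of the Cq model.

```python
import numpy as np
from scipy.optimize import curve_fit

def cq_model(spike, x0, intercept, slope):
    """Cq as a function of added copies:
    Cq = intercept - slope * log10(x0 + spike),
    where x0 is the unknown copy number in the unspiked sample."""
    return intercept - slope * np.log10(x0 + spike)

# Simulated, noise-free data for a sample containing 1000 copies,
# measured unspiked and with three known additions (hypothetical values).
spikes = np.array([0.0, 1000.0, 2000.0, 4000.0])
cq = cq_model(spikes, 1000.0, 38.0, 3.32)

# Non-linear least squares recovers x0 along with the curve parameters.
popt, _ = curve_fit(cq_model, spikes, cq, p0=[500.0, 35.0, 3.0])
x0_est = popt[0]
```

Because the calibration additions experience the same inhibition as the sample itself, a rotational (multiplicative) matrix effect cancels out of the estimate of x0, which is the motivation for standard additions in inhibited qPCR.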

DOI: http://dx.doi.org/10.1007/s00216-011-5460-y

December 2011

Analyst 2008 Aug 29;133(8):992-7. Epub 2008 Apr 29.

LGC Ltd, Queens Road, Teddington, Middlesex, UK TW11 0LY.

Standard additions is a calibration technique devised to eliminate rotational matrix effects in analytical measurement. Although the technique is presented in almost every textbook of analytical chemistry, its behaviour in practice is not well documented and has attracted misleading accounts. The most important limitation is that the method cannot correct translational matrix effects, which must be handled separately. In addition, because the method involves extrapolation beyond the known data, it is often regarded as less precise than external calibration (interpolation) techniques. Here, using a generalised model of an analytical system, we examine the behaviour of the method of standard additions under a range of conditions and find that, if executed optimally, there is no noteworthy loss of precision.
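For reference, the classic linear form of the technique looks like this (illustrative data for a hypothetical analyte; the true concentration is taken to be 1.0 in arbitrary units):

```python
import numpy as np

# Linear standard additions: signal y = b0 + b1 * x_added, and the
# unknown concentration is recovered by extrapolation to y = 0,
# i.e. x0 = b0 / b1 (the magnitude of the x-intercept).
added = np.array([0.0, 1.0, 2.0, 3.0])       # added concentration
signal = np.array([2.05, 3.98, 6.02, 7.95])  # instrument response

b1, b0 = np.polyfit(added, signal, 1)  # slope, intercept
x0 = b0 / b1                           # estimated sample concentration
```

Note that a translational matrix effect (a constant background added to every signal) shifts b0 and therefore biases x0, which is exactly the limitation the abstract highlights: standard additions removes rotational effects (changes of slope) but not translational ones.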

DOI: http://dx.doi.org/10.1039/b717660k

August 2008

Anal Bioanal Chem 2008 Jan 17;390(1):201-13. Epub 2007 Nov 17.

SP Technical Research Institute of Sweden, Box 857, 501 15, Borås, Sweden.

Consistent treatment of measurement bias, including the question of whether or not to correct for bias, is essential for the comparability of measurement results. The case for correcting for bias is discussed, and it is shown that instances in which bias is known or suspected, but in which a specific correction cannot be justified, are comparatively common. The ISO Guide to the Expression of Uncertainty in Measurement does not provide well for this situation. It is concluded that there is a need for guidance on handling cases of uncorrected bias. Several different published approaches to the treatment of uncorrected bias and its uncertainty are critically reviewed with regard to coverage probability and simplicity of execution. On the basis of current studies, and taking into account testing laboratory needs for a simple and consistent approach with a symmetric uncertainty interval, we conclude that for most cases with large degrees of freedom, linear addition of a bias term adjusted for exact coverage ("U(e)") as described by Synek is to be preferred. This approach does, however, become more complex if degrees of freedom are low. For modest bias and low degrees of freedom, summation of bias, bias uncertainty and observed value uncertainty in quadrature ("RSSu") provides a similar interval and is simpler to adapt to reduced degrees of freedom, at the cost of a more restricted range of application if accurate coverage is desired.
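The two treatments compared in the abstract can be sketched numerically. These formulas are reconstructed from the abstract's descriptions for the large-degrees-of-freedom case, and the linear form below is an unadjusted simplification (Synek's U(e) adjusts the bias term for exact coverage, which is omitted here):

```python
import math

def expanded_uncertainty_rssu(u_obs, u_bias, bias, k=2.0):
    """'RSSu' treatment: observed-value uncertainty, bias uncertainty
    and the bias itself summed in quadrature before applying the
    coverage factor, giving a symmetric interval."""
    return k * math.sqrt(u_obs**2 + u_bias**2 + bias**2)

def expanded_uncertainty_linear(u_obs, u_bias, bias, k=2.0):
    """Linear addition of the bias term to the expanded uncertainty
    of the corrected result. An unadjusted, illustrative form of the
    linear-addition approach, not Synek's coverage-adjusted U(e)."""
    return k * math.sqrt(u_obs**2 + u_bias**2) + abs(bias)

# Hypothetical case: u_obs = 1.0, u_bias = 0.5, known bias = 0.8.
u_rssu = expanded_uncertainty_rssu(1.0, 0.5, 0.8)
u_lin = expanded_uncertainty_linear(1.0, 0.5, 0.8)
```

For modest bias the two intervals are similar, which is the abstract's point: RSSu gives a comparable interval while remaining simpler to adapt to reduced degrees of freedom.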

DOI: http://dx.doi.org/10.1007/s00216-007-1693-1

January 2008

BMC Biotechnol 2006 Jul 6;6:33. Epub 2006 Jul 6.

Analytical Technology, LGC Limited, Teddington, TW11 0LY, UK.

DOI: http://dx.doi.org/10.1186/1472-6750-6-33
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1559608

July 2006

Analyst 2006 Jun 11;131(6):710-7. Epub 2006 May 11.

LGC Limited, Queens Road, Teddington, Middlesex, UK TW11 0LY.

Different methods of treating data which lie close to a natural limit of the feasible range, such as zero or 100% mass or mole fraction, are discussed and recommendations made concerning the most appropriate. The methods considered include: discarding observations beyond the limit; shifting observations to the limit; truncating a classical confidence interval based on Student's t (coupled with shifting the result to the limit if it falls outside the feasible range); truncating and renormalising an assumed normal distribution; and taking the maximum-density interval of a Bayesian estimate based on a normal measurement distribution and a uniform prior within the feasible range. Based on consideration of bias, and on simulation to assess coverage, it is recommended that for most purposes a confidence interval near a natural limit should be constructed by first calculating the usual confidence interval based on Student's t, then truncating the out-of-range portion to leave an asymmetric interval, and finally adjusting the reported value to lie within the resulting interval if required. It is suggested that the original standard uncertainty is retained for uncertainty propagation purposes.
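The recommended procedure is straightforward to state in code. A minimal sketch of the truncate-and-shift treatment described above (the example values are hypothetical):

```python
from scipy import stats

def truncated_t_interval(mean, u, dof, lower=0.0, upper=None, conf=0.95):
    """Confidence interval near a natural limit, per the recommendation:
    build the usual Student-t interval, truncate it to the feasible
    range, and shift the reported value into the interval if needed.
    The original standard uncertainty u is retained for propagation."""
    t = stats.t.ppf(0.5 + conf / 2.0, dof)
    lo, hi = mean - t * u, mean + t * u
    lo = max(lo, lower)                      # truncate at the lower limit
    if upper is not None:
        hi = min(hi, upper)                  # truncate at the upper limit
    reported = min(max(mean, lo), hi)        # shift value into interval
    return reported, (lo, hi)

# A mass fraction observed as -0.005 with u = 0.004 and 9 degrees of
# freedom, near the natural limit of zero:
value, (lo, hi) = truncated_t_interval(-0.005, 0.004, 9)
```

The reported value is shifted up to zero and the interval becomes asymmetric, [0, mean + t*u], while the standard uncertainty 0.004 would still be carried forward unchanged in any subsequent uncertainty budget.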

DOI: http://dx.doi.org/10.1039/b518084h

June 2006

J AOAC Int 2006 Jan-Feb;89(1):232-9

University of London, Birkbeck College, School of Biological and Chemical Sciences, Malet St, London, United Kingdom.

The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
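The log-transformation step can be sketched as follows. An illustrative reconstruction with hypothetical round data; the assigned value and the standard deviation for proficiency assessment (sigma_p) are taken to be set by the scheme on the log10 scale:

```python
import math

def log_z_scores(results, assigned, sigma_p):
    """z-scores computed on log10-transformed copy-number results,
    as recommended for the positively skewed distributions seen in
    quantitative PCR proficiency data. 'assigned' and 'sigma_p' are
    on the log10 scale."""
    return [(math.log10(x) - assigned) / sigma_p for x in results]

# Hypothetical round: copy-number results around an assigned value of
# 10^4 copies, with sigma_p = 0.2 log10 units.
results = [8.0e3, 1.0e4, 1.3e4, 3.0e4]
z = log_z_scores(results, assigned=4.0, sigma_p=0.2)
```

On the log scale the scores can be interpreted against the usual normal-based limits (|z| <= 2 satisfactory, |z| >= 3 unsatisfactory); here the last result, three times the assigned value, lands in the questionable range while the others are satisfactory.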


May 2006

Analyst 2002 Jun;127(6):818-24

Department of Statistical Science, University College London, UK.

The choice of an analytical procedure and the determination of an appropriate sampling strategy are here treated as a decision theory problem in which sampling and analytical costs are balanced against possible end-user losses due to measurement error. Measurement error is taken here to include both sampling and analytical variances, but systematic errors are not considered. The theory is developed in detail for the case exemplified by a simple accept or reject decision following an analytical measurement on a batch of material, and useful approximate formulae are given for this case. Two worked examples are given, one involving a batch production process and the other a land reclamation site.
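One simple instance of the cost-loss balance described above is choosing the number of replicate measurements. This is a generic textbook simplification, not the paper's loss model or its approximate formulae: measurement cost grows linearly with n while the expected end-user loss is taken proportional to the variance of the mean of n replicates.

```python
import math

def optimal_replicates(cost_per_measurement, loss_coefficient, sigma):
    """Minimise total expected cost
        E(n) = n * c + L * sigma**2 / n
    (measurement cost plus end-user loss proportional to the variance
    of the mean of n replicates). Setting dE/dn = 0 gives
    n* = sigma * sqrt(L / c). Illustrative only; c, L and sigma are
    hypothetical inputs."""
    n_star = sigma * math.sqrt(loss_coefficient / cost_per_measurement)
    return max(1, round(n_star))

# Measurement cost 10 per replicate, loss coefficient 1000, sigma = 0.5:
n = optimal_replicates(cost_per_measurement=10.0, loss_coefficient=1000.0, sigma=0.5)
```

The same trade-off shape underlies the accept/reject setting in the paper: spending more on sampling and analysis buys a smaller measurement variance, up to the point where further replication costs more than the reduction in expected end-user loss is worth.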

DOI: http://dx.doi.org/10.1039/b111465d

June 2002
