Using repeated measurements to improve the standard uncertainty
Technical notes | 2016 | Eurachem
Summary
Importance of the topic
Repeated measurements are a fundamental tool in analytical chemistry for quantifying the contribution of random variation to measurement uncertainty. Proper treatment of repeated data improves confidence in reported mean values, informs decisions about method performance (repeatability, intermediate precision, reproducibility) and supports traceability and quality control activities. Understanding when and how the standard uncertainty of a mean can be reduced by additional measurements is essential for robust uncertainty budgets and for avoiding under- or overestimation of uncertainty in routine and research laboratories.
Objectives and overview of the study
The text explains the conditions under which the standard uncertainty of a mean decreases with repeated measurements and when the simple reduction formula is valid or invalid. It aims to:
- Present the basic relationship between sample standard deviation and uncertainty of the mean.
- Clarify assumptions required for that relationship to hold (independence and stability of conditions).
- Provide practical examples showing valid and invalid uses of the formula, and indicate appropriate alternatives where correlation or grouping occurs.
Methods and instrumentation
The methodological focus is statistical rather than instrumental. The key equation stated is the standard uncertainty of the sample mean: u(xbar) = s / sqrt(n), where s is the observed standard deviation of the set of measurements and n is the number of independent observations. Important conditions for applying this equation are:
- Observations are independent.
- Measurements are made under stable, comparable conditions (repeatability, within-laboratory reproducibility/intermediate precision, or reproducibility, depending on the study design).
When these conditions are not met, more advanced statistical treatments are required, including the use of grouped summary statistics, analysis of variance (ANOVA), or models that accommodate serial correlation.
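The basic relationship can be sketched in a few lines of Python; the readings below are hypothetical values chosen for illustration, not data from the technical note:

```python
# Minimal sketch: standard uncertainty of the mean of n independent
# observations made under stable conditions, u(xbar) = s / sqrt(n).
import math
import statistics

readings = [10.12, 10.07, 10.11, 10.09, 10.14, 10.08]  # hypothetical replicates

n = len(readings)
s = statistics.stdev(readings)   # sample standard deviation (n - 1 divisor)
u_mean = s / math.sqrt(n)        # standard uncertainty of the mean

print(f"mean = {statistics.mean(readings):.4f}")
print(f"s = {s:.4f}, u(xbar) = {u_mean:.4f}")
```

Note that s is computed with the n - 1 divisor (the sample standard deviation), as is conventional when s is estimated from the same data set.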
Instrumentation used
The examples refer generically to common laboratory instruments and procedures rather than to specific models. Instruments and procedures mentioned include:
- Volumetric glassware (volumetric pipette) used in calibration exercises.
- Routine measurement systems subject to calibration and drift (general analytical instruments).
- Internal quality control (IQC) procedures with daily calibration and duplicate QC measurements.
Main results and discussion
Key points and implications from the discussion are:
- The standard deviation of repeated measurements quantifies the random component of uncertainty for individual observations.
- For the mean of n independent measurements, the standard uncertainty is u(xbar) = s / sqrt(n). The random contribution therefore shrinks as n increases, but this applies only when the independence and stability assumptions are satisfied.
- Example (valid use): For inhomogeneous test materials where sampling variability dominates, multiple independent sample portions measured under repeatability conditions justify using the reduced uncertainty of the mean to represent repeatability uncertainty.
- Examples (invalid use): Grouped or correlated measurements violate the independence assumption:
- Measurements taken in groups (e.g., duplicate QC measurements performed each day with a single calibration per day) contain a common calibration error across duplicates; duplicates are not independent. In that case, calculate the standard deviation of daily means and divide by sqrt(number of days) or apply ANOVA to separate within- and between-group components.
- Time-dependent measurements (e.g., instrument drift or changing sample concentration) generate serial correlation. Successive errors are partially carried over, breaking independence. Correlation-aware statistical methods (time-series analysis, regression with time terms, mixed models with autocorrelation structures) are required to estimate uncertainty correctly.
- Where grouping or correlation is present, using the simple s/sqrt(n) will generally understate uncertainty because it ignores common or correlated error components.
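The grouped-QC case can be illustrated numerically. The duplicate values below are hypothetical, constructed so that the between-day (calibration) scatter dominates the within-day scatter:

```python
# Duplicate QC results measured each day under a single daily calibration
# share that day's calibration error, so the ten results are not independent.
import math
import statistics

daily_duplicates = [            # (replicate 1, replicate 2) per day
    (10.21, 10.23), (10.05, 10.08), (10.30, 10.27),
    (10.12, 10.10), (10.18, 10.21),
]

# Invalid: treat all ten results as independent.
all_results = [x for day in daily_duplicates for x in day]
u_naive = statistics.stdev(all_results) / math.sqrt(len(all_results))

# Valid: standard deviation of the daily means divided by sqrt(number of days).
day_means = [statistics.mean(day) for day in daily_duplicates]
u_grouped = statistics.stdev(day_means) / math.sqrt(len(day_means))

print(f"naive u = {u_naive:.4f}, grouped u = {u_grouped:.4f}")
# The naive value is smaller because it ignores the shared between-day
# calibration component.
```

With these numbers the naive estimate is roughly a third smaller than the grouped one, which is exactly the understatement the text warns about.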
Benefits and practical applications
Applying the correct treatment of repeated measurements provides:
- More accurate and defensible uncertainty estimates for means reported in certificates, method validation reports and QC charts.
- Improved resource allocation by identifying whether additional replicate measurements will materially reduce uncertainty.
- Clearer separation of variability sources (within-run, between-run, operator, instrument), enabling targeted method improvement and control strategies.
Future trends and potential applications
Developments and directions relevant to handling repeated measurements and uncertainty include:
- Broader uptake of mixed-effects and time-series models in routine uncertainty evaluation to handle correlated and hierarchical data structures.
- Integration of automated QC and calibration metadata with statistical workflows so grouping factors (day, instrument, operator) are automatically incorporated into uncertainty calculations.
- Use of simulation (bootstrap, Monte Carlo) where analytical solutions are complex, to propagate uncertainty in the presence of correlation and nonstandard data structures.
- Standardisation of practical guidance and software tools that implement ANOVA-based and correlation-aware uncertainty methods for routine laboratory use.
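The bootstrap route mentioned above can be sketched as a day-level (cluster) bootstrap: resampling whole days with replacement preserves the within-day correlation structure, so the spread of the resampled means approximates the uncertainty of the overall mean. The data are hypothetical:

```python
# Sketch of a cluster (day-level) bootstrap for the uncertainty of a mean
# of grouped QC results; resampling whole days keeps duplicates from the
# same day (and their shared calibration error) together.
import random
import statistics

random.seed(1)

daily_duplicates = [
    (10.21, 10.23), (10.05, 10.08), (10.30, 10.27),
    (10.12, 10.10), (10.18, 10.21),
]

boot_means = []
for _ in range(2000):
    # Resample days (not individual results) with replacement.
    resampled = random.choices(daily_duplicates, k=len(daily_duplicates))
    boot_means.append(statistics.mean(x for day in resampled for x in day))

u_boot = statistics.stdev(boot_means)  # bootstrap standard uncertainty
print(f"bootstrap u = {u_boot:.4f}")
```

For this data set the bootstrap value lands close to the grouped-summary estimate, as expected; resampling individual results instead of days would reproduce the naive understatement.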
Conclusion
The classical reduction of standard uncertainty for a mean (u(xbar) = s / sqrt(n)) is a powerful and simple result but depends critically on independence and stability of measurement conditions. Many common laboratory situations—grouped measurements arising from daily calibrations or serially correlated measurements due to drift—violate those assumptions. In such cases, use grouped-summary approaches, ANOVA, or statistical models that explicitly account for correlation. Awareness of these distinctions prevents underestimation of uncertainty and supports better decision making in method validation, calibration and quality control.
References
- Eurolab Technical Report 1/2006: Guide to the Evaluation of Measurement Uncertainty for Quantitative Test Results, Appendix A.5.
- Produced by the Eurachem/CITAC Measurement Uncertainty and Traceability Working Group, Second English edition, 2016.