BS ISO 13528:2022
$215.11
Statistical methods for use in proficiency testing by interlaboratory comparison
Published By | Publication Date | Number of Pages |
---|---|---|
BSI | 2022 | 104 |
PDF Catalog
PDF Pages | PDF Title |
---|---|
2 | National foreword |
7 | Foreword |
8 | 0 Introduction |
11 | 1 Scope 2 Normative references 3 Terms and definitions |
14 | 4 General principles 4.1 General requirements for statistical methods |
15 | 4.2 Basic model 4.3 General approaches for the evaluation of performance |
16 | 5 Guidelines for the statistical design of proficiency testing schemes 5.1 Introduction to the statistical design of proficiency testing schemes 5.2 Basis of a statistical design |
17 | 5.3 Considerations for the statistical distribution of results |
18 | 5.4 Considerations for small numbers of participants 5.5 Guidelines for choosing the reporting format 5.5.1 General requirements for reporting format |
19 | 5.5.2 Reporting of replicate measurements 5.5.3 Reporting of "less than" or "greater than" a limit (censored data) 5.5.4 Number of significant digits |
20 | 6 Guidelines for the initial review of proficiency testing items and results 6.1 Homogeneity and stability of proficiency test items |
21 | 6.2 Considerations for different measurement methods 6.3 Blunder removal |
22 | 6.4 Visual review of data 6.5 Robust statistical methods |
23 | 6.6 Outlier techniques for individual results |
24 | 7 Determination of the assigned value and its standard uncertainty 7.1 Choice of method of determining the assigned value 7.2 Determining the uncertainty of the assigned value |
25 | 7.3 Formulation |
26 | 7.4 Certified reference material 7.5 Results from one laboratory |
27 | 7.6 Consensus value from expert laboratories |
28 | 7.7 Consensus value from participant results |
29 | 7.8 Comparison of the assigned value with an independent reference value |
30 | 8 Determination of criteria for evaluation of performance 8.1 Approaches for determining evaluation criteria |
31 | 8.2 By perception of experts 8.3 By experience from previous rounds of a proficiency testing scheme 8.4 By use of a general model |
32 | 8.5 Using the repeatability and reproducibility standard deviations from a previous collaborative study of precision of a measurement method 8.6 From data obtained in the same round of a proficiency testing scheme |
33 | 8.7 Monitoring interlaboratory agreement |
34 | 9 Calculation of performance statistics 9.1 General considerations for determining performance 9.2 Limiting the uncertainty of the assigned value |
35 | 9.3 Estimates of deviation (measurement error) |
36 | 9.4 z scores |
37 | 9.5 z′ scores |
38 | 9.6 Zeta scores (ζ) |
39 | 9.7 En scores |
40 | 9.8 Evaluation of participant uncertainties in testing |
41 | 9.9 Combined performance scores 10 Graphical methods for describing performance scores 10.1 Application of graphical methods |
42 | 10.2 Histograms of results or performance scores 10.3 Kernel density plots |
44 | 10.4 Bar-plots of standardized performance scores 10.5 Youden plot |
45 | 10.6 Plots of repeatability standard deviations |
46 | 10.7 Split samples 10.8 Graphical methods for combining performance scores over several rounds of a proficiency testing scheme |
47 | 11 Design and analysis of qualitative proficiency testing schemes (including nominal and ordinal properties) 11.1 Types of qualitative data |
48 | 11.2 Statistical design 11.3 Assigned values for qualitative proficiency testing schemes |
50 | 11.4 Performance evaluation and scoring for qualitative proficiency testing schemes |
52 | Annex A (normative) Symbols |
54 | Annex B (informative) Homogeneity and stability of proficiency test items |
62 | Annex C (informative) Robust analysis |
73 | Annex D (informative) Additional guidance on statistical procedures |
78 | Annex E (informative) Illustrative examples |
101 | Annex F (informative) Example of computer code for plotting and resampling analysis ("bootstrapping") of PT results |
102 | Bibliography |