{"id":356036,"date":"2024-10-20T01:10:38","date_gmt":"2024-10-20T01:10:38","guid":{"rendered":"https:\/\/pdfstandards.shop\/product\/uncategorized\/bs-iso-135282015\/"},"modified":"2024-10-26T01:31:12","modified_gmt":"2024-10-26T01:31:12","slug":"bs-iso-135282015","status":"publish","type":"product","link":"https:\/\/pdfstandards.shop\/product\/publishers\/bsi\/bs-iso-135282015\/","title":{"rendered":"BS ISO 13528:2015"},"content":{"rendered":"
| PDF Page | Section |
|---|---|
| 7 | Foreword |
| 8 | 0 Introduction |
| 11 | 1 Scope; 2 Normative references; 3 Terms and definitions |
| 14 | 4 General principles; 4.1 General requirements for statistical methods |
| 15 | 4.2 Basic model; 4.3 General approaches for the evaluation of performance |
| 16 | 5 Guidelines for the statistical design of proficiency testing schemes; 5.1 Introduction to the statistical design of proficiency testing schemes; 5.2 Basis of a statistical design |
| 17 | 5.3 Considerations for the statistical distribution of results |
| 18 | 5.4 Considerations for small numbers of participants; 5.5 Guidelines for choosing the reporting format |
| 20 | 6 Guidelines for the initial review of proficiency testing items and results; 6.1 Homogeneity and stability of proficiency test items |
| 21 | 6.2 Considerations for different measurement methods; 6.3 Blunder removal; 6.4 Visual review of data |
| 22 | 6.5 Robust statistical methods; 6.6 Outlier techniques for individual results |
| 23 | 7 Determination of the assigned value and its standard uncertainty; 7.1 Choice of method of determining the assigned value |
| 24 | 7.2 Determining the uncertainty of the assigned value |
| 25 | 7.3 Formulation; 7.4 Certified reference material |
| 26 | 7.5 Results from one laboratory |
| 27 | 7.6 Consensus value from expert laboratories |
| 28 | 7.7 Consensus value from participant results |
| 29 | 7.8 Comparison of the assigned value with an independent reference value |
| 30 | 8 Determination of criteria for evaluation of performance; 8.1 Approaches for determining evaluation criteria; 8.2 By perception of experts; 8.3 By experience from previous rounds of a proficiency testing scheme |
| 31 | 8.4 By use of a general model |
| 32 | 8.5 Using the repeatability and reproducibility standard deviations from a previous collaborative study of precision of a measurement method; 8.6 From data obtained in the same round of a proficiency testing scheme |
| 33 | 8.7 Monitoring interlaboratory agreement; 9 Calculation of performance statistics; 9.1 General considerations for determining performance |
| 34 | 9.2 Limiting the uncertainty of the assigned value |
| 35 | 9.3 Estimates of deviation (measurement error) |
| 36 | 9.4 z scores |
| 37 | 9.5 z′ scores |
| 38 | 9.6 Zeta scores (ζ) |
| 39 | 9.7 En scores; 9.8 Evaluation of participant uncertainties in testing |
| 40 | 9.9 Combined performance scores |
| 41 | 10 Graphical methods for describing performance scores; 10.1 Application of graphical methods; 10.2 Histograms of results or performance scores |
| 42 | 10.3 Kernel density plots |
| 43 | 10.4 Bar-plots of standardized performance scores; 10.5 Youden Plot |
| 44 | 10.6 Plots of repeatability standard deviations |
| 45 | 10.7 Split samples |
| 46 | 10.8 Graphical methods for combining performance scores over several rounds of a proficiency testing scheme |
| 47 | 11 Design and analysis of qualitative proficiency testing schemes (including nominal and ordinal properties); 11.1 Types of qualitative data; 11.2 Statistical design |
| 48 | 11.3 Assigned values for qualitative proficiency testing schemes |
| 49 | 11.4 Performance evaluation and scoring for qualitative proficiency testing schemes |
| 52 | Annex A (normative) Symbols |
| 54 | Annex B (normative) Homogeneity and stability of proficiency test items |
| 62 | Annex C (normative) Robust analysis |
| 73 | Annex D (informative) Additional Guidance on Statistical Procedures |
| 77 | Annex E (informative) Illustrative Examples |
| 99 | Bibliography |