Statistical methods for use in proficiency testing by interlaboratory comparisons
Introduction to standards:
GB/T 28043-2011 Statistical methods for proficiency testing using inter-laboratory comparisons
This standard specifies statistical methods for data analysis in proficiency testing schemes and provides recommendations for the use of the above methods by participants in proficiency testing schemes and accreditation bodies in actual work.
This standard is applicable to the verification of the absence of obvious unacceptable bias in laboratory measurement results.
This standard is applicable to quantitative data but not to qualitative data.
This standard was drafted in accordance with the rules given in GB/T1.1-2009 (except for technical content).
This standard is equivalent to the international standard ISO13528:2005 "Statistical methods for proficiency testing using inter-laboratory comparisons". The following modifications and corrections have been made to the errors in ISO13528:2005:
——— In the third paragraph of 4.3, "repeatability standard deviation" and "capability assessment standard deviation" are corrected to "repeatability variance" and "capability assessment variance" respectively;
——— In the second paragraph of 5.6.1, "Formula (53)" is corrected to "Formula (C.1)";
——— In 7.1.3, "Figures 1 and 2" are corrected to "Figures 2 and 3";
——— In 7.2.3, "Figures 1 and 3" are corrected to "Figures 2 and 4";
——— In 7.4.3, "Figures 1 and 4" are corrected to "Figures 2 and 5";
——— In 7.7.2, "X±Ux" is corrected to "x±Ux";
———Correct “7.4 or 7.6” in the second paragraph of 7.8 to “5.4 or 5.6”, and correct “7.5” to “5.5”;
———Correct “uX=1.23×s*/√181=13×10⁻¹⁰” in 7.9.2 to “uX=1.25×s*/√181=13×10⁻¹⁰”;
———Correct “p>16” in 7.9.4 to “p>17”;
———Correct “sx” in Appendix A to “sx”;
———Correct “standard deviation” in the fourth and fifth rows of Appendix B.2 to “variance”;
———Correct “xt,.” in formula (B.4) and formula (B.7) in Appendix B.3 to “.xt,.”.
Appendix A, Appendix B and Appendix C of this standard are all normative appendices.
This standard is proposed and managed by the National Technical Committee for Standardization of Statistical Methods (SAC/TC21).
Drafting units of this standard: China National Institute of Standardization, China National Accreditation Service for Conformity Assessment, Institute of Mathematics and Systems Science of the Chinese Academy of Sciences, Beijing University of Technology, Technical Center of Shandong Inspection and Quarantine Bureau, Liaoning Entry-Exit Inspection and Quarantine Bureau.
Main drafters of this standard: Zhang Fan, Ding Wenxing, Tian Ling, Xie Tianfa, Yu Zhenfan, Zhai Peijun, Feng Shiyong, Guo Wu, Zheng Jiang, Chen Zhimin.
The following documents are indispensable for the application of this document. For any dated referenced document, only the dated version applies to this document. For any undated referenced document, its latest version (including all amendments) applies to this document.
GB/T3358.1—2010 Statistical Vocabulary and Symbols Part 1: General Statistical Terms and Terms Used in Probability (ISO3534-1:2006, IDT)
GB/T3358.2—2010 Statistical Vocabulary and Symbols Part 2: Applied Statistics (ISO3534-2:2006, IDT)
GB/T6379.1—2004 Accuracy (Trueness and Precision) of Measurement Methods and Results Part 1: General Principles and Definitions (ISO5725-1:1994, IDT)
GB/T15483.1—1999 Proficiency Testing Using Interlaboratory Comparisons Part 1: Establishment and Operation of Proficiency Testing Schemes (ISO/IECGuide43-1:1997, IDT)
Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Statistical guidance for the design and interpretation of proficiency testing (see GB/T 15483.1-1999, 5.4.2)
4.1 Action signals and warning signals
4.2 Limits on the uncertainty of the assigned value
4.3 Determination of the number of replicate measurements
4.4 Homogeneity and stability of samples (see GB/T 15483.1-1999, 5.6.2 and 5.6.3)
4.5 Definition of measurement methods
4.6 Data reporting (see GB/T 15483.1-1999, 6.2.3)
4.7 Validity period of proficiency testing results
5 Determination of assigned values and their standard uncertainties
5.1 Selection of methods for determining assigned values
5.2 Formulation method (see GB/T 15483.1-1999, A.1.1, a)
5.3 Certified reference values (see GB/T 15483.1-1999, A.1.1, b)
5.4 Reference values (see GB/T 15483.1-1999, A.1.1, c)
5.5 Expert laboratory consensus values (see GB/T 15483.1-1999, A.1.1, d)
5.6 Participants' consensus values (see GB/T 15483.1-1999, A.1.1, e)
5.7 Comparison of assigned values
5.8 Missing values
6 Determination of the standard deviation for proficiency assessment (see GB/T 15483.1-1999, A.2.1.3)
6.1 Selection of method
6.2 Determination from a specified value
6.3 Determination from an empirical expected value
6.4 Determination from a general model
6.5 Determination from the results of a precision experiment
6.6 Determination from data obtained in a round of the proficiency testing scheme
6.7 Comparison of the precision obtained in proficiency testing with the known precision of the measurement method
7 Calculation of performance statistics
7.1 Estimation of laboratory bias (see GB/T 15483.1-1999, A.2.1.4, a)
7.2 Percent relative difference (see GB/T 15483.1-1999, A.2.1.4, b)
7.3 Rank and rank percentage (see GB/T 15483.1-1999, A.2.1.4, c)
7.4 z values (see GB/T 15483.1-1999, A.2.1.4, d)
7.5 En values (see GB/T 15483.1-1999, A.2.1.4, e)
7.6 ζ values
7.9 Examples of data analysis when uncertainty is reported
7.10 Combined performance statistics
8 Graphical presentation of combined performance statistics for multiple measurands in a proficiency testing scheme
8.1 Application
8.2 Histograms of performance statistics
8.3 Bar graphs of standardized laboratory bias
8.4 Bar graphs of standardized repeatability measurements
8.5 Youden plot
8.6 Repeatability standard deviation plot
8.7 Split samples (see GB/T 15483.1-1999, A.3.1.2)
9 Graphical methods for performance statistics combined over multiple rounds of a proficiency testing scheme (see GB/T 15483.1-1999, A.3.2)
9.1 Application
9.2 Conventional control charts for z values
9.3 Cumulative sum control charts for z values
9.4 Plots of standardized laboratory bias against the mean
9.5 Point diagrams
Appendix A (Normative Appendix) Symbols
Appendix B (Normative Appendix) Tests for homogeneity and stability of samples
Appendix C (Normative Appendix) Robust analysis
References
ICS 03.120.30
National Standard of the People's Republic of China
GB/T 28043-2011/ISO 13528:2005
Statistical methods for use in proficiency testing by interlaboratory comparisons
(ISO 13528:2005, IDT)
Approved 2011-10-31 by the General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China and the Standardization Administration of China
Implemented 2012-02-01
Published by China Standards Press
Introduction
0.1 Purpose of proficiency testing
Proficiency testing by interlaboratory comparisons is used to determine the capability of a laboratory to perform specific tests or measurements, and to monitor the laboratory's continuing performance. A detailed description of proficiency testing is given in the introduction of GB/T 15483.1.
From a statistical point of view, laboratory capability is characterized by three properties: laboratory bias, stability, and repeatability. Laboratory bias and repeatability are defined in GB/T 3358.1, GB/T 3358.2 and GB/T 6379.1; the stability of laboratory results can be measured by the intermediate precision measures defined in GB/T 6379.3.
When reference materials are available, the procedures of GB/T 6379.4 can be used to evaluate laboratory bias on the test material. In addition, proficiency testing by interlaboratory comparisons provides another way of obtaining information on laboratory bias, and using proficiency testing data to estimate laboratory bias is an important aspect of analysing those data.
Because stability and repeatability also affect the data obtained in proficiency testing, a laboratory cannot always tell from a single round whether poor stability or poor repeatability is the actual cause of a problem. Regular evaluation of laboratory stability and repeatability is therefore important. Stability is assessed by re-testing retained samples, or by repeated measurement of reference materials or internal reference materials (materials stored by the laboratory itself for use as reference materials); this method is also given in GB/T 6379.3. In proficiency testing, laboratory stability can also be evaluated by plotting control charts, which is an important aspect of analysing proficiency testing data, although this is not feasible in a single-round proficiency testing scheme. Data suitable for evaluating repeatability can be obtained from routine laboratory work, or from an experiment conducted specifically for repeatability evaluation; repeatability evaluation is therefore not an important aspect of proficiency testing, although the repeatability of a testing laboratory remains very important. The range control chart method given in GB/T 6379.6 can be used to evaluate repeatability.
Figure 1 shows a flow chart of the application of the statistical techniques in this standard.
0.2
GB/T 15483.1 describes the different types of proficiency testing schemes and provides guidance on the organization and design of proficiency testing schemes. GB/T 15483.2 provides guidance for laboratory accreditation bodies on the selection and use of proficiency testing schemes. These two standards should be used as reference documents in the field of proficiency testing (their content does not overlap with this standard). The appendix of GB/T 15483.1 briefly describes statistical methods for proficiency testing schemes; this standard supplements GB/T 15483.1 by providing detailed guidance on the use of statistical methods in proficiency testing. This standard is largely based on the harmonized protocol for proficiency testing of analytical laboratories, but it is not designed to be applicable to all test methods.
Figure 1 Flow chart of the application of statistical techniques in a proficiency testing scheme
Statistical methods for use in proficiency testing by interlaboratory comparisons
1 Scope
This standard specifies statistical methods for analysing the data obtained from proficiency testing schemes, and provides recommendations on the use of these methods by participants in proficiency testing schemes and by accreditation bodies.
This standard is applicable to verifying that laboratory measurement results do not exhibit obvious unacceptable bias. It is applicable to quantitative data but not to qualitative data.
2 Normative references
The following documents are indispensable for the application of this document. For dated references, only the dated edition applies to this document. For undated references, the latest edition (including all amendments) applies to this document.
GB/T 3358.1—2010 Statistics — Vocabulary and symbols — Part 1: General statistical terms and terms used in probability (ISO 3534-1:2006, IDT)
GB/T 3358.2—2010 Statistics — Vocabulary and symbols — Part 2: Applied statistics (ISO 3534-2:2006, IDT)
GB/T 6379.1—2004 Accuracy (trueness and precision) of measurement methods and results — Part 1: General principles and definitions (ISO 5725-1:1994, IDT)
GB/T 15483.1—1999 Proficiency testing by interlaboratory comparisons — Part 1: Development and operation of proficiency testing schemes (ISO/IEC Guide 43-1:1997, IDT)
3 Terms and definitions
The terms and definitions given in GB/T 3358.1, GB/T 3358.2 and GB/T 6379.1 and the following apply to this document.
3.1
interlaboratory comparison
Organization, performance and evaluation of measurements or tests on the same or similar test items by two or more laboratories in accordance with predetermined conditions.
3.2
proficiency testing
Determination of laboratory testing capability by means of interlaboratory comparison.
3.3
assigned value
Value attributed to a particular quantity, with an uncertainty appropriate for a given purpose; the value is sometimes adopted by agreement.
3.4
standard deviation for proficiency assessment
Measure of dispersion used in evaluating the results of proficiency testing.
3.5
z value (z-score)
Standardized measure of laboratory bias, calculated from the assigned value and the standard deviation for proficiency assessment.
Note: The z value is a ratio.
3.6
coordinator
Organization or individual responsible for coordinating all of the activities involved in the operation of a proficiency testing scheme.
4 Statistical guidance for the design and interpretation of proficiency testing (see GB/T 15483.1-1999, 5.4.2)
4.1 Action signals and warning signals
4.1.1 This standard provides some simple numerical and graphical criteria that can be applied to the data obtained in proficiency testing to determine whether the data give rise to action signals or warning signals. Even in a well-run laboratory with experienced staff, anomalous data may occasionally be obtained; similarly, a standard measurement method that has been validated in interlaboratory trials may have defects that are revealed only after several rounds of proficiency testing, and the proficiency testing scheme itself may have deficiencies. For these reasons, the tests given here should not be used as a basis for punishing laboratories. If proficiency testing results are to be used as a basis for sanctions, criteria specifically designed for that purpose are required.
4.1.2 The criteria in this standard are intended for use when the standard deviation for proficiency assessment is derived from the observed results, using any of the methods given in Clause 6.
4.1.3 The coordinator should understand the main sources of variation expected in the proficiency testing data. The first step of any analysis should be to examine the distribution of the measurement results in order to detect unexpected sources of variation. For example, a bimodal distribution indicates that the results may come from a mixed population, caused by differences in measurement method, sample contamination, or unclear test instructions. In such cases, the underlying problem should be resolved before analysis and evaluation proceed.
The laboratory accreditation body should first establish a policy for dealing with unacceptable proficiency testing results, and then determine the specific steps for implementing this policy within its laboratory quality management procedures. There are, however, generally recommended steps to follow when a laboratory produces an unacceptable result in proficiency testing; specific guidance on action is given in 4.1.4.
4.1.4 When a laboratory obtains an unacceptable result in proficiency testing, it should take appropriate corrective action, if necessary in consultation with the coordinator or the laboratory accreditation body. Unless there is a justifiable reason, the laboratory should review its own procedures and confirm that measures have been taken to prevent such results from recurring. The laboratory may ask the coordinator about possible causes of the problem, or ask the coordinator to consult other experts.
The laboratory should participate in subsequent rounds of the proficiency testing scheme to evaluate the effectiveness of its corrective actions. Appropriate corrective measures may include:
a) verifying that the relevant personnel understand and follow the measurement procedure;
b) verifying that all details of the measurement procedure are correct;
c) verifying the calibration of equipment and the condition of reagents;
d) replacing suspect equipment or reagents;
e) carrying out comparative tests of personnel, equipment and/or reagents with another laboratory.
See GB/T 15483.2-1999 for the use of proficiency testing results by laboratory accreditation bodies.
4.2 Limits on the uncertainty of the assigned value
The standard uncertainty u_X of the assigned value depends on the method used to determine X and, when the assigned value is obtained from the results of the participating laboratories, on the laboratory data and possibly other factors. Clause 5 gives methods for calculating the standard uncertainty of assigned values. The standard deviation for proficiency assessment is used to assess the bias of a laboratory in a proficiency test; methods for calculating it are given in Clause 6, and its comparison with estimates of laboratory bias is described in Clause 7. When the standard uncertainty of the assigned value is too large relative to the standard deviation for proficiency assessment used in the proficiency test, there is a risk that some laboratories will receive action or warning signals because the assigned value is inaccurate, rather than for any cause internal to the laboratory. For this reason, the standard uncertainty of the assigned value should be determined and reported to the laboratories participating in the proficiency testing scheme (see GB/T 15483.1-1999, A.1.4 and A.1.6). The uncertainty of the assigned value may be considered negligible when:
u_X ≤ 0.3σ̂ (1)
When criterion (1) is met, the uncertainty of the assigned value is negligible and need not be included in the interpretation of the proficiency testing results. When criterion (1) is not met, the following should be considered:
a) find a better method of determining the assigned value, so that its uncertainty meets criterion (1);
b) take the uncertainty of the assigned value into account in the interpretation of the proficiency testing results (see the En values of 7.5 or the ζ values of 7.6);
c) inform the participants in the proficiency test that the uncertainty of the assigned value is not negligible.
Example: Assume that the assigned value X is the robust average of the participants' results and that the standard deviation for proficiency assessment σ̂ is the robust standard deviation s* of the same results. Then u_X = 1.25×s*/√p, where p is the number of laboratories, and criterion (1) is met only when p exceeds 17.
Note also that when the test material is unstable, or there are factors that cause laboratory results to differ systematically, the uncertainty of the assigned value will be larger than calculated.
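The participant-consensus example in 4.2 can be checked numerically. Assuming, as in that example, σ̂ = s* and u_X = 1.25×s*/√p, criterion (1) reduces to p ≥ (1.25/0.3)² ≈ 17.4. A sketch with illustrative names:

```python
import math

def u_assigned_from_consensus(s_star, p):
    """Standard uncertainty of an assigned value taken as the robust
    average of p participants' results: u_X = 1.25 * s* / sqrt(p)."""
    return 1.25 * s_star / math.sqrt(p)

def uncertainty_negligible(u_x, sigma_pt):
    """Criterion (1): u_X <= 0.3 * sigma_hat."""
    return u_x <= 0.3 * sigma_pt

# With sigma_hat = s*, criterion (1) reduces to p >= (1.25 / 0.3)**2,
# about 17.4, so it is met only when more than 17 laboratories take part.
```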
4.3 Determination of the number of replicate measurements
Poor repeatability is one of the causes of laboratory error in proficiency testing. When the repeatability variation is large relative to the standard deviation for proficiency assessment, it can lead to erratic proficiency testing results: a laboratory may show a large error in one round but not in the next, with repeatability the likely cause. Therefore, when replicate measurements are needed to limit the influence of repeatability, the number n of replicate measurements made by each laboratory in the proficiency test should satisfy:
s_r/√n ≤ 0.3σ̂ (2)
where s_r is the repeatability standard deviation of the measurement method.
The factor 0.3 is used because, when condition (2) is met, the contribution of the repeatability standard deviation to the standard deviation for proficiency assessment (calculated as a variance) is no more than 10 %.
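Condition (2) can be rearranged to give the smallest acceptable number of replicates, n ≥ (s_r/(0.3σ̂))². A minimal sketch, with an illustrative function name:

```python
import math

def min_replicates(s_r, sigma_pt):
    """Smallest n satisfying s_r / sqrt(n) <= 0.3 * sigma_hat,
    i.e. n >= (s_r / (0.3 * sigma_hat))**2, and at least 1."""
    return max(1, math.ceil((s_r / (0.3 * sigma_pt)) ** 2))
```

For example, with s_r = 0.6 and σ̂ = 1.0 this gives n = 4, and the repeatability contribution to the variance is then (0.6/√4)²/1.0² = 9 %, within the 10 % limit noted above.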
All laboratories should carry out the same number of replicate measurements; the analysis methods given in this standard assume that this condition is met. If condition (2) is not met, the number of replicate measurements should be increased, or allowance should be made for repeatability in the interpretation of the proficiency testing results. The approach above also assumes that the laboratories have similar repeatability; when this assumption does not hold and the methods of this standard are still to be applied, the following technique may be used: the coordinator determines the number of measurements from the typical repeatability standard deviation of the measurement method, and each laboratory then checks whether its own repeatability standard deviation satisfies inequality (2); if not, it should adjust its measurement procedure (for example, by increasing the number of replicates and averaging the results) so that inequality (2) is satisfied.
4.4 Homogeneity and stability of samples (see GB/T 15483.1-1999, 5.6.2 and 5.6.3)
Appendix B gives methods for checking whether the samples used in a proficiency test are sufficiently homogeneous and stable. When the sample preparation method cannot meet the homogeneity criterion of Appendix B, the participants should test replicate samples, or the standard deviation for proficiency assessment should be determined so as to include the between-sample variation, as described in Appendix B.
4.5 Definition of measurement methods
In some tests, the measurand is defined by the measurement method itself. For example, the particle size distribution of a granular material may be determined using square-hole sieves. It may not be possible to say which sieving method is "correct", so unless a particular sieving method is specified, laboratories applying different sieving methods may reach different conclusions. If participants use different methods to determine such a measurand, their results may be mutually biased even when there is no fault in their measurement process.
If the measurement method is not properly specified, valid conclusions cannot be drawn. There are ways to overcome this problem:
a) specify a conventional (standard) measurement method and establish the assigned value by that method; participants in the proficiency test should also follow this method;
b) when the measurand is specified but the measurement method is not, a similar situation arises and a similar choice must be made.
4.6 Data reporting (see GB/T 15483.1-1999, 6.2.3)
To meet the needs of the calculations on the proficiency testing results, it is recommended that results be reported with enough digits that the rounding error of a single measurement result does not exceed 1/2 of the last reported digit. Participants should be required to report the actual values of their measurement results. Measurement results should not be expressed as interval values (for example, a result should not be reported as "less than the detection limit"). Similarly, when an observation is negative, the actual negative value should be reported (a result should not be reported as "0" when it is logically possible for the result to be negative). Participants should be informed that if they report an interval value for a particular sample, or report a negative result as "0", all of their data for that sample will be excluded from the analysis. If necessary, the form used for reporting results may include a column allowing participants to indicate that a result is below the detection limit.
4.7 Validity period of proficiency testing results
The validity of results obtained in a single round of a proficiency testing scheme is limited to the period of that round. The fact that a laboratory obtained satisfactory results in one round does not imply that it can also obtain reliable data under other circumstances.
A laboratory that operates an effective quality system and has obtained satisfactory results in successive rounds of a proficiency testing scheme may, however, be considered capable of producing reliable data on a continuing basis.
5 Determination of assigned values and their standard uncertainties
5.1 Selection of methods for determining assigned values
Methods for determining the assigned value X are described in 5.2 to 5.6. In accordance with the requirements of GB/T 15483.1, the coordinator is responsible for choosing the method, following the advice of technical experts. When the number of laboratories participating in the scheme is small, 5.5 and 5.6 may not be applicable.
The methods given in this standard for calculating the standard uncertainty of the assigned value are usually adequate for the applications in this standard. Other methods may be used, provided that they have a sound statistical basis and are described in detail in the proficiency testing scheme documentation.
The coordinator is responsible for determining the assigned value. The assigned value should not be disclosed to the participants until they have reported their results to the coordinator. The coordinator should issue a report giving details of how the assigned value was obtained, the laboratories involved in determining it, its traceability, and its measurement uncertainty. JJF 1059-1999 "Evaluation and Expression of Uncertainty in Measurement" gives guidance on the determination of measurement uncertainty.
Where a robust statistical method is the most appropriate, this standard recommends its use (as in 5.5 and 5.6). Alternatively, a method that includes the detection and removal of outliers may be used, provided that it has a sound statistical basis and is documented. GB/T 6379.2 gives guidance on the detection of outliers.
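Appendix C specifies the robust analysis to be used with the consensus methods of 5.5 and 5.6. Purely as an illustration of the idea behind robust estimation (this is not the Appendix C algorithm), the median and the scaled median absolute deviation give outlier-resistant analogues of the mean and standard deviation:

```python
import statistics

def robust_location_scale(results):
    """Illustrative robust estimates: the median as a location estimate,
    and 1.483 * MAD as a scale estimate that is consistent with the
    standard deviation for normally distributed data.  Appendix C uses
    a different, iterative procedure; this is only a sketch."""
    med = statistics.median(results)
    mad = statistics.median(abs(x - med) for x in results)
    return med, 1.483 * mad
```

A single wild result (say 35.0 among values near 10) barely moves these estimates, whereas it would dominate the ordinary mean and standard deviation.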
5.2 Formulation (see GB/T 15483.1-1999, A.1.1, a))
5.2.1 General
The test material may be prepared by mixing components in specified proportions, or by adding a specified proportion of a component to a base material (matrix). In such cases the assigned value X is calculated from the masses of the components used.

Direct preparation of the test material by formulation is very useful: provided the proportions of the components, or the amount of the added component, can be guaranteed, there is no need to homogenize a bulk sample, and the homogeneity of the test material is also assured. However, when the material obtained by formulation differs from typical test materials, or takes a different form, another method of obtaining the test material should be preferred.

5.2.2 Standard uncertainty uX of the assigned value

When the assigned value is calculated from the formulation of the test material, its standard uncertainty is obtained by combining the uncertainty components in accordance with JJF 1059-1999 "Evaluation and Expression of Uncertainty in Measurement"; in chemical analysis, for example, the uncertainty is usually dominated by weighing.

The limitation of this method (in chemical analysis) is the need to ensure that:
a) the base material is effectively free of the added component, or the proportion of that component in the base material is accurately known;
b) all components are mixed homogeneously (where this is required);
c) all sources of error have been identified (for example, failing to recognize that glassware can adsorb the compound of interest, so that its concentration in aqueous solution changes);
d) there is no interaction between the added components and the base material.

5.2.3 Example: determination of the cement content of concrete

In this example, test samples can be prepared by first weighing the components (cement, water, sand and gravel) and then mixing them into a combined sample.
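As a hedged numerical sketch of 5.2.2, the cement mass fraction of a formulated sample and its standard uncertainty can be combined GUM-style (JJF 1059) from the weighing uncertainties. The masses and the balance uncertainty below are invented for illustration; the sensitivity coefficients are obtained by simple numerical differentiation rather than analytically.

```python
"""Hedged sketch: combining weighing uncertainties into u_X for a
formulated test material (JJF 1059 / GUM style). All numbers invented."""
import math


def cement_fraction(m):
    # mass fraction of cement in the mixed sample; m maps component -> mass (kg)
    return m["cement"] / sum(m.values())


masses = {"cement": 3.500, "water": 1.750, "sand": 7.000, "gravel": 10.500}
u_mass = 0.002  # assumed standard uncertainty of each weighing, kg

x = cement_fraction(masses)

# numerical sensitivity coefficients dX/dm_i, then root-sum-square combination
u_sq = 0.0
h = 1e-6
for k in masses:
    bumped = dict(masses)
    bumped[k] += h
    c_i = (cement_fraction(bumped) - x) / h
    u_sq += (c_i * u_mass) ** 2
u_x = math.sqrt(u_sq)
print(f"assigned value X = {x:.4f}, u_X = {u_x:.5f}")
```

With these invented masses the cement fraction is about 0.154 and the combined standard uncertainty is of the order of 1e-4, i.e. dominated entirely by the weighings, as 5.2.2 suggests.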
Since the exact composition of the prepared sample is known, this method is far better than determining the cement content by analytical methods.
5.3 Certified reference values (see GB/T 15483.1-1999, A.1.1, b))
5.3.1 General
When the material used in the proficiency test is a certified reference material (CRM), the certified value of that material is used as the assigned value.
Note 1: The certified value is sometimes also called the certified reference value.
Note 2: A certified reference material is sometimes also called a certified standard sample.
5.3.2 Standard uncertainty uX of the assigned value
When a certified reference material is used as the test material, the standard uncertainty of the assigned value is given in its certificate. The limitation of this method is that supplying every participant in the proficiency test with a certified reference material can be expensive.
5.3.3 Example: LA value of aggregates
The LA (Los Angeles) value is a measure of the mechanical strength of aggregates used in road construction, expressed in units of LA. In the certification of a particular aggregate reference material, a portion of the samples was used to determine the certified value and its uncertainty; 28 laboratories participated. The assigned value was X = 21.62 LA, with standard uncertainty uX = 0.25 LA. The remaining aggregate samples can be used for proficiency testing.
5.4 Reference values (see GB/T 15483.1-1999, A.1.1, c))
5.4.1 General
First, the test material is prepared as a reference material (RM). Some samples are selected at random and tested, together with a CRM, in a single laboratory using a suitable measurement method under repeatability conditions (see GB/T 3358.1); the assigned value of the test material is then obtained by calibration against the certified value of the CRM.

5.4.2 Standard uncertainty uX of the assigned value

When the assigned value of the test material is obtained from a series of tests on the test material and the CRM, the standard uncertainty of the assigned value is derived from the uncertainties of the test results and of the certified value of the CRM. If the test material and the CRM are not closely similar (in matrix, composition and level of results, etc.), this dissimilarity should also be considered as a component of the standard uncertainty of the assigned value.

An assigned value obtained by this method is traceable to the certified value of the CRM, its standard uncertainty is calculable, and the cost of distributing CRMs to all participants is avoided; in these respects it is better than other methods. However, the method assumes that there is no interaction between the test conditions and the test material. The example in 5.4.3 illustrates how to calculate the uncertainty when the test material is compared directly with a single CRM to obtain its assigned value.
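The uncertainty combination described in 5.4.2 can be sketched numerically: the standard uncertainty of an assigned value obtained by comparison with a CRM combines the CRM's certified uncertainty with the uncertainty of the mean comparison difference. All values below (certified value, certified uncertainty, replicate differences) are invented for illustration, and any matrix-dissimilarity component is omitted.

```python
"""Hedged sketch for 5.4.2: u_X when the assigned value is obtained by
comparing the test material with a CRM. All numbers are invented."""
import math
import statistics

crm_value = 100.00  # certified value of the CRM (assumed)
u_crm = 0.25        # certified standard uncertainty of the CRM (assumed)

# replicate differences (test material - CRM) under repeatability conditions
diffs = [0.12, -0.05, 0.08, 0.02, -0.01]

d_bar = statistics.mean(diffs)
s_d = statistics.stdev(diffs)
u_comp = s_d / math.sqrt(len(diffs))   # standard uncertainty of the mean difference

assigned = crm_value + d_bar           # assigned value by calibration against the CRM
u_x = math.sqrt(u_crm**2 + u_comp**2)  # root-sum-square combination
print(f"X = {assigned:.2f}, u_X = {u_x:.3f}")
```

In this invented case the comparison contributes little, so u_X stays close to the CRM's certified uncertainty; a matrix-dissimilarity term, if needed, would enter the root-sum-square in the same way.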