Proficiency testing by interlaboratory comparisons - Part 1: Development and operation of proficiency testing schemes (GB/T 15483.1-1999)

Basic Information

Standard ID: GB/T 15483.1-1999

Standard Name:Proficiency testing by interlaboratory comparisons--Part 1:Development and operation of proficiency testing schemes

Chinese Name: 利用实验室间比对的能力验证 第1部分:能力验证计划的建立和运作

Standard category:National Standard (GB)

Status: Withdrawn

Date of Release: 1999-03-08

Date of Implementation:1999-09-01

Date of Expiration:2013-07-01

standard classification number

Standard ICS number: 03.120.20 (Quality >> Product and company certification. Conformity assessment)

Standard Classification Number:General>>Standardization Management and General Regulations>>A00 Standardization, Quality Management

associated standards

alternative situation:GB/T 15483-1995

Adoption status: identical (≡) to ISO/IEC Guide 43-1:1997

Publication information

publishing house:China Standard Press

Publication date:1999-09-01

other information

Release date:1995-02-14

Review date:2004-10-14

drafter:Liu Anping, Zhai Peijun, Mao Zuxing, Li Renliang, Wang Xuewen, Shi Changyan, Liu Zhimin, Jia Li

Drafting unit:China National Accreditation Service for Laboratories

Focal point unit:China National Accreditation Service for Laboratories and China Institute of Standardization and Information Classification and Coding

Proposing unit:China National Accreditation Service for Laboratories

Publishing department:State Administration of Quality and Technical Supervision

competent authority:National Standardization Administration

Introduction to standards:

This standard applies to operators and users of proficiency testing, such as participating laboratories, accreditation bodies, statutory bodies, and clients who need to assess the technical competence of laboratories.


Excerpt from the standard:

GB/T 15483.1-1999
This standard is identical to ISO/IEC Guide 43-1:1997, Proficiency testing by interlaboratory comparisons - Part 1: Development and operation of proficiency testing schemes.
GB/T 15483-1999, under the general title "Proficiency testing by interlaboratory comparisons", comprises the following two parts:
- GB/T 15483.1-1999 Proficiency testing by interlaboratory comparisons - Part 1: Development and operation of proficiency testing schemes
- GB/T 15483.2-1999 Proficiency testing by interlaboratory comparisons - Part 2: Selection and use of proficiency testing schemes by laboratory accreditation bodies
This standard is a revision of GB/T 15483-1995, Development and operation of laboratory comparison tests. It emphasizes the operation of interlaboratory comparisons for proficiency testing (most of the principles are also applicable to interlaboratory comparisons for other purposes). Compared with GB/T 15483-1995, this standard has been modified and supplemented in the following respects:
- The name of the standard has been changed from "Development and operation of laboratory comparison tests" to the present "Proficiency testing by interlaboratory comparisons".
- In form and content, GB/T 15483-1999 is divided into two parts, GB/T 15483.1 and GB/T 15483.2. Compared with GB/T 15483-1995, this standard adds Appendix A (examples of statistical methods for processing proficiency testing data), Appendix B (quality management of proficiency testing schemes) and Appendix C (bibliography), and adds descriptions of scheme elements, making the standard more comprehensive and more operable.
Appendix A, Appendix B and Appendix C of this standard are all informative appendices.
This standard and GB/T 15483.2-1999 replace GB/T 15483-1995 from the date of implementation.
This standard was proposed by the China National Accreditation Committee for Laboratories.
This standard is under the jurisdiction of the China National Accreditation Committee for Laboratories and the China Institute of Standardization and Information Classification and Coding.
This standard was drafted by the China National Accreditation Committee for Laboratories and the China Institute of Standardization and Information Classification and Coding. The main drafting units of this standard are: the China National Accreditation Committee for Laboratories, the China Institute of Standardization and Information Classification and Coding, the National Institute of Metrology of China, the National Research Center for Certified Reference Materials, and China National Petroleum Corporation.
The main drafters of this standard are: Liu Anping, Zhai Peijun, Mao Zuxing, Li Renliang, Wang Xuewen, Shi Changyan, Liu Zhimin and Jia Li.
ISO/IEC Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization. National bodies that are members of ISO or IEC participate in the development of international standards through technical committees established to deal with particular fields of technical activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.
ISO/IEC Guide 43-1 was prepared by an ad hoc expert group of ISO/CASCO (the ISO Committee on Conformity Assessment) and is a revision of ISO/IEC Guide 43. The first draft was circulated for comment to the members of CASCO and to the IEC national committees. It was finally approved by ISO/CASCO, submitted to the IEC, and published as an ISO/IEC guide.
Parts 1 and 2 of ISO/IEC Guide 43:1997 replace ISO/IEC Guide 43:1984. ISO/IEC Guide 43:1984 was a guide to the development and operation of laboratory proficiency testing, but it did not emphasize the use of proficiency testing results by accreditation bodies. ISO/IEC Guide 43:1997 provides guidance on:
a) how to distinguish between interlaboratory comparisons used for proficiency testing and those used for other purposes;
b) the development and operation of proficiency testing schemes using interlaboratory comparisons;
c) the selection and use of proficiency testing schemes by laboratory accreditation bodies.
ISO/IEC Guide 43:1997, under the general title "Proficiency testing by interlaboratory comparisons", contains the following parts:
- Part 1: Development and operation of proficiency testing schemes;
- Part 2: Selection and use of proficiency testing schemes by laboratory accreditation bodies.
The annexes to ISO/IEC Guide 43-1:1997 provide statistical guidance on the treatment of proficiency testing data and guidance on the preparation of the operating documents (quality manuals) of proficiency testing schemes.
Interlaboratory comparisons are used for many purposes by participating laboratories and other organizations. For example, interlaboratory comparisons can be used to:
a) determine the ability of a laboratory to perform certain specific tests or measurements, and to monitor the laboratory's continued ability;
b) identify problems in a laboratory and initiate appropriate remedial actions, which may concern, for example, the performance of individual staff or the calibration of instruments;
c) determine the validity and reliability of new test and measurement methods, and provide appropriate control of these methods;
d) increase the confidence of laboratory users in the tests performed;
e) identify differences between laboratories;
f) determine the performance characteristics of a method, often called collaborative testing; or
g) assign values to reference materials (RMs) and assess their suitability for use in specific test or measurement procedures.
Proficiency testing is interlaboratory comparison carried out for purpose a), that is, to determine the testing or measurement capability of a laboratory. However, the operation of proficiency testing schemes often also provides information for the other purposes listed above. Participation in proficiency testing schemes provides laboratories with an objective means of assessing and demonstrating the reliability of the data they produce. Although there are many types of proficiency testing schemes (see clause 4), most share the common feature of comparing the test and measurement results of two or more laboratories.
One of the main uses of proficiency testing schemes is to assess a laboratory's ability to perform tests competently. This may include evaluation by the laboratory itself, by its clients, or by other bodies such as accreditation or statutory bodies. It thus supplements a laboratory's internal quality control procedures with an external measure of its testing capability. Such activities also complement the technique of on-site assessment of laboratories by technical experts, which is commonly used by laboratory accreditation bodies.
For users of laboratory services, it is important to be able to trust that a testing or calibration laboratory can continue to produce reliable results. To obtain such assurance, users may evaluate proficiency testing schemes themselves or use the evaluation results of other organizations.
This standard emphasizes the operation of interlaboratory comparisons for the purpose of proficiency testing, but many of its principles and guidelines are also applicable to interlaboratory comparisons operated for other purposes. Many laboratory accreditation bodies operate their own proficiency testing schemes, and quite a number choose to use proficiency testing schemes or other forms of interlaboratory comparison operated by other organizations. GB/T 15483.2 is intended to provide harmonized principles for laboratory accreditation bodies when selecting appropriate interlaboratory comparisons for use in proficiency testing. Some organizations that evaluate the technical competence of laboratories require or expect satisfactory performance in proficiency testing schemes as an important basis for confidence that a laboratory can provide reliable results (rather than as an improper use of proficiency testing).
However, it should be emphasized that there is a significant difference between the following two situations:
a) evaluating a laboratory's competence, according to specific requirements, by reviewing the laboratory's overall operation;
b) regarding the results of a laboratory's proficiency testing only as information on the laboratory's technical capability under the particular measurement conditions covered by the specific proficiency testing scheme.
This standard draws on guidance documents related to proficiency testing developed by ILAC (International Laboratory Accreditation Cooperation), ISO/TC 69 (the technical committee on applications of statistical methods), ISO/REMCO (the ISO committee on reference materials), IUPAC (International Union of Pure and Applied Chemistry), AOAC (Association of Official Analytical Chemists), ASTM (American Society for Testing and Materials), and WECC and WELAC (Western European Calibration Cooperation and Western European Laboratory Accreditation Cooperation, now merged into European cooperation for Accreditation of Laboratories, EAL).
1 Scope
National Standard of the People's Republic of China
Proficiency Testing by Interlaboratory Comparisons
Part 1: Development and Operation of Proficiency Testing Schemes
Proficiency testing by interlaboratory comparisons - Part 1: Development and operation of proficiency testing schemes
GB/T 15483.1-1999
Replaces GB/T 15483-1995
Although interlaboratory comparisons have many uses and vary in design and implementation, it is possible to establish basic principles that should be considered when organizing them. This standard sets out these principles and describes the factors that should be taken into account in the organization and operation of proficiency testing schemes. GB/T 15483.2 describes how proficiency testing schemes should be selected and used by laboratory accreditation bodies that evaluate the technical competence of laboratories.
This standard is applicable to operators and users of proficiency testing, such as participating laboratories, accreditation bodies, statutory bodies, and clients who need to assess the technical competence of laboratories. It is particularly useful for the self-assessment of laboratories. It should be recognized, however, that proficiency testing is only one mechanism to help establish mutual trust between users of different testing laboratories. Some accreditation bodies now require laboratories to participate regularly in accepted proficiency testing schemes. It is therefore essential that the operators of such schemes adhere to accepted principles of technical requirements, statistical procedures (see the examples in Appendix A) and quality management (see the guidance in Appendix B) when operating professionally managed proficiency testing schemes. Different proficiency testing organizers cannot be expected to operate in exactly the same way: this standard does not give specific operational details of interlaboratory comparisons. Its content is intended only as a framework to be adapted to particular circumstances, and it is applicable to schemes with few participants as well as to schemes with many participants.
This standard does not cover a technique often used by some organizations, namely the assessment of the competence of an individual laboratory through the distribution of certified reference materials or other well-characterized materials. Appendix C gives a list of references.
2 Normative references
The following standards contain provisions which, through reference in this standard, constitute provisions of this standard. The editions indicated were valid at the time of publication of this standard. All standards are subject to revision, and parties using this standard should investigate the possibility of applying the latest editions of the standards listed below.
GB/T 15481-1995 General requirements for the competence of calibration and testing laboratories
GB/T 15483.2-1999 Proficiency testing by interlaboratory comparisons - Part 2: Selection and use of proficiency testing schemes by laboratory accreditation bodies
JJF 1001-1998 General terms in metrology and their definitions
ISO 3534-1:1993 Statistics - Vocabulary and symbols - Part 1: Probability and general statistical terms
ISO 5725-1:1994 Accuracy (trueness and precision) of measurement methods and results - Part 1: General principles and definitions
ISO 5725-2:1994 Accuracy (trueness and precision) of measurement methods and results - Part 2: Basic method for the determination of repeatability and reproducibility of a standard measurement method
ISO 5725-4:1994 Accuracy (trueness and precision) of measurement methods and results - Part 4: Basic methods for the determination of the trueness of a standard measurement method
ISO 9000 quality management and quality assurance standards, 1994 editions
ISO/IEC Guide 2:1996 General terms and their definitions concerning standardization and related activities
Guide to the Expression of Uncertainty in Measurement, 1993, approved by BIPM (International Bureau of Weights and Measures), IEC, IFCC, ISO, IUPAC (International Union of Pure and Applied Chemistry), IUPAP (International Union of Pure and Applied Physics) and OIML (International Organization of Legal Metrology)
The International Harmonized Protocol for the Proficiency Testing of (Chemical) Analytical Laboratories. Journal of AOAC International, 76, No. 4, 1993, 926-940
Evaluation of Matrix Effects: Proposed Guideline. NCCLS (National Committee for Clinical Laboratory Standards) Document EP14-P, Villanova, PA, 1994
3 Definitions
This standard adopts the following definitions, some of which are taken from relevant international or national standards or from ISO/IEC guides.
3.1 test
Technical operation that consists of the determination of one or more characteristics of a given product, process or service according to a specified procedure. (ISO/IEC Guide 2:1996)
3.2 testing laboratory
Laboratory that performs tests. (ISO/IEC Guide 2:1996)
Note: The term "testing laboratory" may be used in a legal sense, in a technical sense, or in both.
3.3 test item
Material or product submitted for proficiency testing.
3.4 test method
Specified technical procedure for performing a test. (ISO/IEC Guide 2:1996)
3.5 test result
The value of a characteristic obtained by carrying out a specified test method. (ISO/IEC Guide 2:1996)
3.6 (laboratory) proficiency testing
Determination of the testing capability of a laboratory by means of interlaboratory comparisons. (ISO/IEC Guide 2:1996)
Note: In this standard, the term "laboratory proficiency testing" is used in its widest sense. It includes, for example:
a) qualitative schemes: for example, requiring laboratories to identify a component of a test item;
b) data transformation exercises: for example, providing laboratories with sets of data and requiring them to process the data to provide further information;
c) single item testing: a single item is sent in turn to several laboratories and returned to the organizer;
d) one-off exercises: test items are provided to laboratories on a single occasion;
e) continuous schemes: test items are sent to laboratories at specified intervals on a continuing basis;
f) sampling: for example, requiring individuals to take samples for subsequent analysis.
3.7 interlaboratory comparisons
Organization, performance and evaluation of tests on the same or similar test items by two or more laboratories in accordance with predetermined conditions.
Note: In some cases, one of the laboratories taking part in the comparison may be the laboratory that provides the assigned value for the test item.
3.8 reference material (RM)
A material or substance one or more of whose property values are sufficiently homogeneous and well established to be used for the calibration of measuring devices, the assessment of measurement methods, or the assignment of values to materials. (6.13 in JJF 1001-1998)
3.9 certified reference material (CRM)
A reference material, accompanied by a certificate, one or more of whose property values are certified by a procedure which establishes traceability to an accurate realization of the unit in which the property values are expressed, and for which each certified value is accompanied by an uncertainty at a stated level of confidence. (6.14 in JJF 1001-1998)
3.10 reference laboratory
A laboratory that provides reference values for a test item, for example a national calibration laboratory.
3.11 assigned value
A value attributed to a particular quantity, with an uncertainty appropriate for a given purpose, sometimes accepted by agreement. (JJF 1001-1998)
3.12 traceability
The property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international measurement standards, through an unbroken chain of comparisons all having stated uncertainties. (6.10 in JJF 1001-1998)
3.13 coordinator
The organization (or person) responsible for coordinating all the activities involved in the operation of a proficiency testing scheme.
3.14 trueness
The closeness of agreement between the average value obtained from a series of test results and an accepted reference value. (ISO 3534-1:1993)
3.15 precision
The closeness of agreement between independent test results obtained under prescribed conditions. (ISO 3534-1:1993)
3.16 outlier
A member of a set of values which is inconsistent with the other members of that set. (ISO 5725-1:1994)
3.17 extreme results
Outliers and other values that are grossly inconsistent with the other values in a data set.
Note: Such results can have a significant influence on summary statistics such as the mean and the standard deviation.
3.18 robust statistical techniques
Techniques that minimize the influence of extreme results on estimates of the mean and standard deviation; such techniques give extreme results less weight rather than removing them from the data.
3.19 uncertainty of measurement
A parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. (3.9 in JJF 1001-1998)
4 Types of proficiency testing
4.1 General
Proficiency testing techniques vary depending on the nature of the test item, the method in use and the number of laboratories participating. Most proficiency testing schemes share the common feature that the results obtained by one laboratory are compared with those obtained by one or more other laboratories. In some schemes, a participating laboratory may also have a control, coordinating or reference function. Common types of proficiency testing scheme are described below.
4.2 Measurement comparison schemes
Measurement comparison schemes involve the test or calibration item being circulated successively from one participating laboratory to the next. Schemes of this type usually have the following characteristics:
a) The assigned value of the test item is provided by a reference laboratory, which may be the country's highest authority for the measurement concerned. It may be necessary to calibrate the test item at specific stages during the conduct of the proficiency test, to ensure that the assigned value does not change significantly over the course of the scheme.
b) Completing such a scheme sequentially is time-consuming (it can take years). This causes certain difficulties: for example, ensuring the stability of the items; strictly monitoring the circulation of the items and the measurement time allowed to each participant; and the need to feed individual laboratories' performance back to the participants during the conduct of the scheme rather than waiting until it ends. In addition, comparing results on a group basis may be difficult, because there may be relatively few laboratories with similar measurement capabilities.
c) Each measurement result is compared with a reference value determined by the reference laboratory. The coordinator should take into account the measurement uncertainty claimed by each participating laboratory.
d) Examples of the items (measurement artifacts) circulated in such proficiency tests include reference standards (such as resistors, gauges and instruments).
4.3 Interlaboratory testing schemes
Interlaboratory testing schemes involve sub-samples randomly selected from a source of material being distributed simultaneously to participating laboratories for concurrent testing. (This technique is also sometimes used in measurement comparison schemes.) After the tests are completed, the results are returned to the coordinating body and compared with the assigned value(s) to give an indication of the performance of the individual laboratories and of the group as a whole. Examples of items used in this type of proficiency testing include food, body fluids, water, soil and other environmental materials. In some cases, the test items distributed are separate portions of previously established (certified) reference materials.
The whole batch of test items provided to participants in a given round of comparison must be sufficiently homogeneous that any extreme results identified later cannot be attributed to significant variability between test items (see 5.6.2 and A4 in Appendix A). Accreditation bodies, statutory bodies and other organizations commonly use this type of interlaboratory testing scheme when applying proficiency testing in the field of testing.
A commonly used interlaboratory testing design is the "split level" design, in which two separate test items have similar (but not identical) levels of the measurand. It is used to estimate a laboratory's precision at a specific level of the measurand, and it avoids the problems associated with using the same test item for replicate testing or including two identical test items in the same round of proficiency testing.
4.4 Split sample testing schemes
A particular form of proficiency testing often used by laboratory users, including some statutory bodies, is the split sample testing technique (not to be confused with the split level design described in 4.3). In a typical split sample testing scheme, the comparative data are provided by a small group of laboratories (often only two) that are being evaluated as potential or continuing providers of testing services.
Similar comparisons are often used in commercial transactions, in which samples representing the traded goods are split between a laboratory representing the supplier and another representing the buyer. A further sample is usually retained for testing by a third-party laboratory in case significant differences between the results of the supplier's and buyer's laboratories require arbitration.
A split sample testing scheme involves dividing a sample of a product or material into two or more parts, with each participating laboratory testing one part of each sample. Unlike the type of proficiency testing described in 4.3, a split sample scheme usually involves only a very limited number of laboratories (often two). Uses of this type of testing include identifying poor precision, characterizing consistent bias and verifying the effectiveness of corrective actions. Such schemes often require that sufficient material be retained for further analysis by an additional laboratory in order to resolve differences found between the limited number of laboratories.
Similar split-sample testing techniques are also used to monitor clinical and environmental laboratories. Typically, such schemes compare the results obtained by one laboratory on a number of split samples, covering a wide range of concentrations, with the results of one or more other laboratories. In some schemes, one of the laboratories may be considered to operate at a higher metrological level (i.e. with lower uncertainty) because it uses standard methods, more advanced equipment, and so on, and its results are therefore treated as the reference values in these comparisons. Such a laboratory may act as an advisory or mentor laboratory for the other laboratories participating in the split-sample comparisons.
4.5 Qualitative schemes
The evaluation of laboratory performance does not always involve interlaboratory comparisons (see note a) in 3.6). For example, some schemes are designed to evaluate a laboratory's ability to characterize a specific entity (e.g. to identify a type of asbestos or a specific pathogenic organism). Such schemes may involve the scheme coordinator preparing test items to which the target component has been added. These schemes are therefore "qualitative" in nature and do not require the participation of multiple laboratories or the evaluation of performance through interlaboratory comparison.
4.6 Known value schemes
Another special type of proficiency testing scheme involves preparing test items with known values of the measurand, making it possible to evaluate an individual laboratory's ability to test the item and to report numerical results for comparison with the assigned value. Again, this form of proficiency testing does not require the participation of many laboratories.
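Where a reported result is compared with a known or assigned value and both carry stated expanded uncertainties, a normalized error score (often called the E_n number) is widely used in measurement comparison and known value schemes. The sketch below is illustrative only: the function name and the example figures are hypothetical, and the |E_n| <= 1 acceptance convention comes from general proficiency testing practice rather than from this standard.

```python
import math

def en_number(lab_value, lab_u, assigned_value, assigned_u):
    """Normalized error E_n for one result against an assigned value.

    lab_u and assigned_u are expanded uncertainties (typically k = 2).
    By convention, |E_n| <= 1 indicates agreement consistent with the
    stated uncertainties; |E_n| > 1 flags the result for investigation.
    """
    return (lab_value - assigned_value) / math.sqrt(lab_u**2 + assigned_u**2)

# Hypothetical example: a participant reports 10.2 with U = 0.4
# against an assigned value of 10.0 with U = 0.3.
score = en_number(10.2, 0.4, 10.0, 0.3)
```

Here `score` works out to 0.4, so the hypothetical result would be considered consistent with the assigned value at the stated uncertainties.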
4.7 Partial process schemes
Some special types of proficiency testing involve evaluating a laboratory's ability to carry out parts of the overall testing or measurement process. For example, some existing schemes assess a laboratory's ability to transform and report a given set of data (rather than to carry out the actual test or measurement), or to take and prepare samples or specimens in accordance with a specification.
5 Organization and design
5.1 Specifications
5.1.1 The design stage of any proficiency testing scheme requires the involvement of technical experts, statisticians and a scheme coordinator to ensure its success and smooth operation.
5.1.2 In consultation with these experts, the coordinator should establish a plan appropriate to the particular proficiency testing scheme; the design of a scheme should avoid unclear objectives. Before the scheme begins, its detailed plan should be agreed and documented (see Appendix B). The plan should normally include the following information:
a) the name and address of the organization operating the proficiency testing scheme;
b) the name and address of the coordinator and of any experts involved in the design and operation of the scheme;
c) the nature and purpose of the scheme;
d) the procedure for selecting participants or, where appropriate, the criteria to be met before participation is allowed;
e) the names and addresses of laboratories involved in parts of the scheme (for example sampling, sample handling, homogeneity testing and assignment of values);
f) the nature of the test items selected and of the tests to be performed, with a brief description of how these choices were made;
g) a description of how the test items are obtained, handled, calibrated and transported;
h) a description of the information to be provided to participants at the outset, and of the timetable for the various phases of the scheme;
i) the expected starting and finishing dates of the scheme, including the date(s) on which participants are to perform the tests;
j) for continuous schemes, the frequency with which test items are distributed;
k) information on the methods or procedures that participants may need to use to perform the tests or measurements (usually their routine procedures);
l) an outline of the statistical analysis to be used, including the determination of assigned values and the detection of outliers;
m) a description of the data or information to be returned to participants;
n) the basis for the performance evaluation techniques;
o) a description of the extent to which test results, and conclusions based on the proficiency testing results, are to be made public.
5.2 Staff
5.2.1 Personnel involved in developing the scheme should have adequate qualifications and experience in the design, conduct and reporting of interlaboratory comparisons, or should work closely with people who have such competence. The relevant qualifications and experience cover appropriate technical, statistical and managerial skills.
5.2.2 As noted in 5.1.1, the operation of a particular interlaboratory comparison also requires guidance from people with detailed technical knowledge and experience of the methods and data involved. For this purpose the coordinator may need to enlist one or more suitable people to form an advisory group, who may be drawn, for example, from professional bodies, contract laboratories (if any), and the ultimate users of the data produced by the scheme's participants.
5.2.3 The role of the advisory group may include:
a) devising and reviewing the procedures for the planning, execution, analysis, reporting and effectiveness of the proficiency testing scheme;
b) identifying and evaluating interlaboratory comparisons organized by other bodies;
c) evaluating proficiency testing results as they relate to the competence of participating laboratories;
d) advising any body that assesses the technical competence of laboratories on the results obtained from the scheme and on how those results should be used alongside other aspects of laboratory evaluation;
e) advising participants who encounter problems;
f) resolving disputes between the coordinator and participants.
5.3 Data processing equipment
Whatever equipment is used, it should be capable of entering all necessary data, performing the statistical analysis and providing timely and valid results. Data-entry procedures should be verified, and all software should be validated, maintained and backed up, including provision for the storage and security of data.
5.4 Statistical design
5.4.1 The statistical methods used should be documented, together with a brief account of why the data-analysis techniques were selected. General statistical procedures and the treatment of proficiency testing data are discussed in Annex A.
5.4.2 An appropriate statistical design is essential for a proficiency testing scheme. The following items, and their interactions, should be considered:
a) the precision and trueness of the tests involved;
b) the smallest difference between participating laboratories to be detected at the required confidence level;
c) the number of participating laboratories;
d) the number of samples to be tested, and the number of replicate tests or measurements per sample;
e) the procedures used to estimate the assigned value;
f) the procedures used to identify outliers.
5.4.3 In the absence of reliable information for item a) above, it may be necessary in certain circumstances to organize a preliminary interlaboratory comparison (collaborative trial) to obtain this information.
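Where the assigned value is estimated from the participants' own results, a robust consensus estimate is one common choice. The following Python sketch is an illustration only, not drawn from this standard's text: it takes the median as the assigned value and a MAD-based robust standard deviation for later performance scoring.

```python
import statistics

def consensus_assigned_value(results):
    """Robust consensus estimate from participants' results.

    Returns (assigned_value, robust_sd): the median of the results and a
    robust standard deviation, 1.4826 * MAD, chosen so that it agrees with
    the ordinary standard deviation for normally distributed, outlier-free
    data.
    """
    med = statistics.median(results)
    mad = statistics.median(abs(x - med) for x in results)
    return med, 1.4826 * mad

# One gross outlier (14.0) barely moves the consensus value:
value, sd = consensus_assigned_value([10.1, 9.9, 10.0, 10.2, 14.0, 9.8])
```

The design choice here is that a single aberrant laboratory should not shift the assigned value for everyone else; a simple mean would be pulled strongly towards 14.0, while the median is not.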
5.5 Preparation of test items
5.5.1 The preparation of test items may be subcontracted or undertaken by the coordinator; the organization preparing them should demonstrate its competence to do so.
5.5.2 Any conditions relating to the test items that could affect the integrity of the interlaboratory comparison, such as homogeneity, sample stability, possible damage in transit and the effects of ambient conditions, should be taken into account (see 5.6).
5.5.3 The test items or materials distributed should generally be similar in nature to those routinely tested by the participating laboratories.
Note: An NCCLS document (Villanova, PA) gives examples of recommended specimen groups.
5.5.4 The number of test items distributed will depend on the range of characteristics or levels to be covered.
5.5.5 Assigned values should not be disclosed to participants until the results have been collated and verified. In some cases, however, it may be appropriate to inform participants of the assigned range before testing.
5.5.6 In addition to the test items required for the proficiency testing scheme, the preparation of extra test items may be considered. After the participants' results have been evaluated, the remaining items may be used as laboratory reference materials, quality control materials or training materials.
5.6 Handling of test items
5.6.1 Procedures for the sampling, dispatch, receipt, identification, labelling, storage and disposal of test items should be documented.
5.6.2 When new materials are prepared for proficiency testing, each batch of test items should be sufficiently homogeneous that all participants receive items that do not differ significantly in the quantities being measured. The coordinator should document the procedures used to establish the homogeneity of test items (see Annex A).
Whenever possible, homogeneity testing should be performed before the items are distributed to participants. Homogeneity should be demonstrated so that differences between test items do not significantly affect the evaluation of participants' results.
5.6.3 Whenever possible, the coordinator should provide evidence that the test items are sufficiently stable throughout the proficiency testing process and that no significant change occurs. Where instability of the test items could affect certain measurands, the coordinator may need to specify a period within which the testing must be completed, together with any special handling requirements.
5.6.4 The coordinator should consider any hazards that the test items may present and take appropriate measures to inform all parties who could be exposed to potential hazards (e.g. test item distributors, testing laboratories).
5.7 Choice of method/procedure
5.7.1 Participants can normally use the method of their choice, i.e. the method consistent with their routine procedures. In some cases, however, the coordinator may instruct participants to use a specific method. Such methods are usually nationally or internationally adopted standard methods that have been validated by appropriate procedures (e.g. collaborative trial).
5.7.2 In calibration schemes, the assigned values are usually obtained by measurement in a high-level calibration laboratory (often a national standards laboratory) using a well-defined and accepted procedure. It is desirable that all participants use the same or similar procedures, but this is not always feasible for a group of laboratories.
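The homogeneity check in 5.6.2 is typically a one-way analysis of variance on replicate measurements of several randomly selected test items, with the between-item variation then judged against the scheme's evaluation standard deviation. A minimal Python sketch, assuming an equal number of replicates per item (the acceptance criterion itself is scheme-specific and not taken from this standard):

```python
def homogeneity_anova(samples):
    """One-way ANOVA on replicate results per test item.

    samples: list of lists; each inner list holds the replicate results
    for one test item. Returns (s_between, s_within): the between-item
    standard deviation component and the within-item (repeatability)
    standard deviation. A large s_between relative to the scheme's
    evaluation standard deviation suggests inadequate homogeneity.
    """
    k = len(samples)                       # number of test items checked
    n = len(samples[0])                    # replicates per item (assumed equal)
    means = [sum(s) / n for s in samples]
    grand = sum(means) / k
    ss_between = n * sum((m - grand) ** 2 for m in means)
    ss_within = sum((x - m) ** 2 for s, m in zip(samples, means) for x in s)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (k * (n - 1))
    # Between-item variance component, clamped at zero when the
    # between-item mean square does not exceed the within-item one.
    var_between = max(0.0, (ms_between - ms_within) / n)
    return var_between ** 0.5, ms_within ** 0.5
```

For example, duplicate results on three items that differ only by measurement repeatability yield a between-item component of zero, while items with genuinely different levels produce a positive component.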
GB/T 15483.1--1999
5.7.3 Where participants are free to choose the method they use, the coordinator should, where appropriate, request details of that method so that results can be compared with those of other participants and comments on methods can be made.
5.8 Development of proficiency testing schemes
So that proficiency testing schemes remain responsive to technological and scientific developments that require the introduction of new types of sample or new methods and procedures, care should be taken when drawing long-term conclusions about the performance of individual laboratories from the results of such schemes (see 6.4.5).
6 Operation and reporting
6.1 Coordination and documentation
The coordinator should be responsible for the day-to-day operation of the scheme. All activities and processes should be documented; these documents may form part of, or supplement, a quality manual (see Annex B).
6.2 Instructions
6.2.1 Detailed instructions covering all aspects of the scheme should be given to participating laboratories; they may, for example, form part of the scheme protocol.
6.2.2 Factors that could influence the testing of the given test items or materials should be described in detail, including the operator, the nature of the item or material, the condition of the equipment, the choice of test procedure and the timing of the testing.
6.2.3 Specific guidance may also be given on recording and reporting test and calibration results (e.g. units, number of significant figures, reporting format, deadline for results).
6.2.4 Participants should be instructed to handle proficiency test items in the same manner as routine test items (unless the proficiency test design requires a departure from this principle).
6.3 Packaging and transport
The coordinator should consider the following aspects of distributing the test or measurement items. The packaging and method of transport must be adequate and must protect the stability and characteristics of the test items. Dangerous-goods regulations or customs requirements may place restrictions on transport, and in some cases, particularly in sequential measurement comparison schemes, the participating laboratories themselves may be responsible for shipping the items. Any relevant customs documentation should be completed by the coordinator so that delays in customs clearance are minimized, and the scheme should comply with national and international regulations applicable to the transport of test items.
6.4 Analysis and recording of data
6.4.1 Results received from participating laboratories should be recorded and analysed, and reported back to the laboratories as soon as practicable. It is essential that procedures be used to check the validity of data entry, data transfer and the subsequent statistical analysis (see 5.3). Data spreadsheets, computer backup files, printouts and plots should be retained for a specified period.
6.4.2 Data analysis should generate summary statistics, performance statistics and associated information consistent with the statistical model and objectives of the scheme. Outlying results should either be identified by outlier tests and then eliminated or, preferably, handled with robust statistical methods that minimize their influence on the summary statistics. General guidance on statistical evaluation is given in Annex A.
6.4.3 The coordinator should have documented criteria for dealing with test results that are unsuitable for performance evaluation, for example withholding or qualifying scores when the test material proves insufficiently homogeneous or stable for the purposes of the proficiency test.
6.5 Proficiency testing report
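The "preferably robust" route in 6.4.2 can be illustrated with an iterated robust mean and standard deviation in the style of Algorithm A of ISO 13528 (an assumption here, since this standard's Annex A is not reproduced): instead of discarding outliers, extreme results are progressively winsorized so that they influence, but do not dominate, the summary statistics.

```python
import statistics

def robust_mean_sd(results, tol=1e-6, max_iter=100):
    """Iterated robust mean and standard deviation (Algorithm A style).

    Start from the median and a MAD-based scale, then repeatedly clip the
    data at mean +/- 1.5*sd and re-estimate until both estimates converge.
    The 1.483 and 1.134 factors restore consistency with the normal
    standard deviation after winsorization.
    """
    x_star = statistics.median(results)
    s_star = 1.483 * statistics.median(abs(x - x_star) for x in results)
    for _ in range(max_iter):
        delta = 1.5 * s_star
        clipped = [min(max(x, x_star - delta), x_star + delta) for x in results]
        new_x = statistics.fmean(clipped)
        new_s = 1.134 * statistics.stdev(clipped, xbar=new_x)
        if abs(new_x - x_star) < tol and abs(new_s - s_star) < tol:
            return new_x, new_s
        x_star, s_star = new_x, new_s
    return x_star, s_star
```

Compared with outright elimination, this approach avoids the all-or-nothing decision of an outlier test: a borderline result is down-weighted smoothly rather than either kept in full or discarded.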
6.5.1 The content of proficiency testing reports varies with the purpose of the particular scheme, but reports should be clear and comprehensive and include data on the distribution of results from all laboratories, together with an indication of each participant's performance (see 6.6).
6.5.2 The following information should normally appear in reports of proficiency testing schemes:
a) the name and address of the organization conducting or collaborating in the scheme;
b) the names and affiliations of the personnel involved in the design and conduct of the scheme (see 5.2);
c) the date of issue of the report;
d) the report number and a clear identification of the scheme;
e) a clear description of the items or materials used, including details of sample preparation and homogeneity testing;
f) the participating laboratories' codes and their test results;
g) statistical data and summaries, including assigned values and the range of acceptable results;
h) the procedures used to establish the assigned values;
i) details of the origin and uncertainty of any assigned values;
j) assigned values and summary statistics for the test methods/procedures used by other participants (if different methods are used by different participants);
k) comments on the laboratories' performance by the coordinator and technical advisers;
l) the procedures used to design and implement the scheme (which may include reference to the scheme protocol);
m) the procedures used for the statistical analysis of the data (see Annex A);
n) where appropriate, advice on the interpretation of the statistical analysis.
6.5.3 For some schemes operated on a regular basis, a simple report may suffice. Many of the items listed in 6.5.2 may then be omitted from routine reports, but they should be included in periodic summary reports and be made available to participants on request.
6.5.4 Reports should be issued as quickly as possible and within the specified time. Ideally all the raw data provided would be reported back to participants, but this may not be practicable in some large schemes; participants should at least receive the results of all laboratories in summary form (e.g. graphically). In some schemes, such as long-term measurement comparison schemes, interim reports should be sent to individual participants.
6.6 Competence evaluation
6.6.1 Where competence evaluation is required, the coordinator is responsible for ensuring that the method of evaluation is appropriate to maintain the credibility of the scheme.
6.6.2 The coordinator may seek the assistance of technical advisers to provide expert commentary on the following aspects of laboratory performance:
a) overall performance against prior expectations (taking uncertainties into account);
b) within-laboratory and between-laboratory variation (and comparison with any previous or published precision data);
c) differences between methods or procedures, if applicable;
d) possible sources of error (for extreme results) and suggestions for improving performance;
e) any other suggestions, recommendations or general comments;
f) conclusions.
6.6.3 During or after a particular scheme it may be necessary to provide participants periodically with various summary tables. For a continuing scheme these may include an updated summary of each laboratory's performance in each round; on request, such summaries may be further broken down and any significant trends indicated.
6.6.4 Various procedures exist for reviewing participants' performance, whether after completion of a single scheme or after successive rounds of a continuing scheme. An example of such a procedure is given in Annex A.
6.6.5 Ranking laboratories according to their proficiency testing performance is not advisable; to avoid misleading comparisons and misunderstanding, any ranking should be treated with extreme caution.
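A round-by-round review of the kind described in 6.6.4 is often built on z-scores and simple combined scores across rounds. The sketch below uses the widely applied convention that |z| ≤ 2 is satisfactory, 2 < |z| < 3 questionable and |z| ≥ 3 unsatisfactory, together with the rescaled and squared sums of z-scores; these conventions are assumptions here, not reproduced from this standard's Annex A.

```python
def z_score(result, assigned_value, sigma):
    """Performance statistic z = (x - X) / sigma for one round."""
    return (result - assigned_value) / sigma

def classify(z):
    """Common z-score interpretation (individual scheme criteria may differ)."""
    a = abs(z)
    if a <= 2.0:
        return "satisfactory"
    return "questionable" if a < 3.0 else "unsatisfactory"

def combined_scores(zs):
    """Across-round summaries for one laboratory.

    RSZ = sum(z)/sqrt(n) highlights persistent bias in one direction;
    SSZ = sum(z**2) highlights overall dispersion, since individually
    unremarkable but consistently large |z| values accumulate.
    """
    n = len(zs)
    return sum(zs) / n ** 0.5, sum(z * z for z in zs)
```

The two combined scores answer different questions: a laboratory with z-scores of +1.5 in every round looks satisfactory each time, yet its RSZ grows steadily, flagging a systematic effect that single-round scoring would miss.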
6.7 Communication with participants
6.7.1 Participants should be given detailed information about participation in the proficiency testing scheme, for example a formal scheme protocol. Subsequent contact with participants may be by letter, newsletter and/or reports, combined with periodic open meetings. Any changes to the design or operation of the scheme should be communicated to participants promptly.
6.7.2 Participants who believe that the evaluation of their performance in a proficiency test is incorrect should be able to raise the matter with the coordinator.
6.7.3 Laboratories should be encouraged to give feedback so that participants contribute actively to the development of the scheme.
6.7.4 Procedures relating to corrective action by participants (particularly feedback to accreditation bodies) are described in GB/T 15483.2.
7 Confidentiality/ethical considerations
7.1 Confidentiality of records
It is the usual policy of most schemes that the identity of individual participants is confidential, known only to a small number of people within the coordinating organization. This confidentiality should extend to any subsequent remedial advice given, or action taken, in respect of a laboratory whose performance is poor. In some circumstances the coordinating body may be required to report a participant's poor performance to a particular authority; participants should be informed of this possibility when they agree to join the scheme.
Participants in a proficiency testing scheme may choose to waive confidentiality within the group for the purposes of discussion and mutual assistance in improving performance.
7.2 Collusion and falsification of results
Although proficiency testing is intended primarily to help participants improve their performance, participants may be tempted to convey a falsely favourable impression of it. For example, collusion may occur between laboratories, so that the data submitted are not truly independent. A laboratory may also give a false impression of its performance if it routinely performs single analyses but, for proficiency test samples, reports the mean of replicate measurements or carries out additional replicates. Proficiency testing schemes should therefore, as far as feasible, be designed to ensure that collusion and falsification of results are minimized. While the coordinator should take all reasonable measures to prevent such practices, it remains the responsibility of the participating laboratories themselves to avoid them.