ICS 07.060

Meteorological Industry Standard of the People's Republic of China

QX/T 526—2019

Specifications for tests of technical equipment specialized for meteorological observation—General requirements

Issued 2019-12-26    Implemented 2020-04-01

Issued by the China Meteorological Administration
Contents

Foreword
1 Scope
2 Normative references
3 Terms and definitions
4 Basic requirements
5 Test conditions
6 Test plan
7 Test process
8 Test items and requirements
9 Data processing and analysis
10 Test results and evaluation
11 Test report
12 Data collation and archiving
Appendix A (normative) Reliability test
Appendix B (normative) Requirements and methods for environmental tests
Appendix C (normative) Data processing and analysis
References
Foreword

This standard was drafted in accordance with the rules given in GB/T 1.1—2009.
This standard was proposed by, and is under the jurisdiction of, the National Technical Committee for Standardization of Meteorological Instruments and Methods of Observation (SAC/TC 507).
Drafting organization of this standard: Meteorological Observation Center of the China Meteorological Administration.
Main drafters of this standard: Mo Yueqin, Wang Xiaolan, Zhang Xuefen, Chen Yao, Zhang Ming, Ren Xiaoyu, Wang Tiantian, Guo Qiyun, Gong Na.
1 Scope
This standard specifies the basic requirements, test conditions, test plan, test process, test items and requirements, data processing and analysis, test results and evaluation, test report, and data collation and archiving for tests of technical equipment specialized for meteorological observation.
This standard applies to the testing and evaluation of technical equipment specialized for meteorological observation. It does not apply to the testing and evaluation of meteorological-satellite-related equipment or of technical equipment for weather modification operations.

2 Normative references
The following documents are indispensable for the application of this document. For dated references, only the edition cited applies to this document. For undated references, the latest edition (including all amendments) applies to this document.

GB/T 2423.1—2008 Environmental testing for electric and electronic products Part 2: Test methods Test A: Low temperature
GB/T 2423.2—2008 Environmental testing for electric and electronic products Part 2: Test methods Test B: High temperature
GB/T 2423.3—2016 Environmental testing Part 2: Test methods Test Cab: Steady-state damp heat
GB/T 2423.4—2008 Environmental testing for electric and electronic products Part 2: Test methods Test Db: Cyclic damp heat (12 h + 12 h cycle)
GB/T 2423.17—2008 Environmental testing for electric and electronic products Part 2: Test methods Test Ka: Salt spray
GB/T 2423.21—2008 Environmental testing for electric and electronic products Part 2: Test methods Test M: Low pressure
GB/T 2423.24 Environmental testing Part 2: Test methods Test Sa: Simulated solar radiation at ground level and guidance for solar radiation testing
GB/T 2423.25 Environmental testing for electric and electronic products Part 2: Test methods Test Z/AM: Combined low temperature/low air pressure tests
GB/T 2423.37—2006 Environmental testing for electric and electronic products Part 2: Test methods Test L: Dust and sand
GB/T 2423.38—2008 Environmental testing for electric and electronic products Part 2: Test methods Test R: Water test methods and guidance
GB 5080.7—1986 Equipment reliability testing Compliance test plans for failure rate and mean time between failures assuming a constant failure rate
GB/T 6587—2012 General specification for electronic measuring instruments
GB/T 9414.3—2012 Maintainability Part 3: Verification and collection, analysis and presentation of data
GB/T 11463—1989 Reliability test for electronic measuring instruments
GB/T 13983 Instruments and meters Basic terms
GB/T 17626.2 Electromagnetic compatibility Testing and measurement techniques Electrostatic discharge immunity test
GB/T 17626.3 Electromagnetic compatibility Testing and measurement techniques Radiated, radio-frequency, electromagnetic field immunity test
GB/T 17626.4 Electromagnetic compatibility Testing and measurement techniques Electrical fast transient/burst immunity test
GB/T 17626.5 Electromagnetic compatibility Testing and measurement techniques Surge immunity test
GB/T 17626.6 Electromagnetic compatibility Testing and measurement techniques Immunity to conducted disturbances induced by radio-frequency fields
GB/T 17626.8 Electromagnetic compatibility Testing and measurement techniques Power frequency magnetic field immunity test
GB/T 17626.11 Electromagnetic compatibility Testing and measurement techniques Voltage dips, short interruptions and voltage variations immunity tests
GB 31221—2014 Environmental protection specification for meteorological detection Surface meteorological observing station
GB/T 37467 Terminology of meteorological instruments
GJB 899A—2009 Reliability identification and acceptance tests
JJF 1059.1 Evaluation and expression of uncertainty in measurement

3 Terms and definitions
The terms and definitions defined in GB/T 13983 and GB/T 37467, together with the following, apply to this document.

3.1
technical equipment specialized for meteorological observation
A general term for the equipment, apparatus, instruments, consumables and corresponding software systems used specifically in the field of meteorological observation.
3.2
dynamic comparison test
Test in which technical equipment specialized for meteorological observation is tested under natural environmental conditions to evaluate the integrity and accuracy of its observation data, the stability and reliability of equipment operation, the consistency of the measurement results of instruments of the same model, and the comparability with the meteorological observation network.
Note: The accuracy of observation data in a dynamic comparison test differs from the accuracy of laboratory test data. In a dynamic comparison test, if there is a definite comparison standard, accuracy refers to the degree of agreement with that standard; if there is no definite comparison standard, accuracy refers to the degree of agreement with the changing trend of the comparison instrument (correlation).
3.3
integrity of observation data
Parameter characterizing the ability of the tested product to obtain observation data.
Note 1: In this standard it is referred to as integrity, and is expressed as the percentage of the number of data actually observed at the data terminal of the test sample during a given observation period to the number of data that should have been observed: integrity (%) = (number of actually observed data / number of data that should be observed) × 100%.
Note 2: It can also be expressed as the missing rate, i.e. the percentage of the number of missing data of the test sample to the number of data that should be observed: missing rate (%) = (number of missing data / number of data that should be observed) × 100%.
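Example (informative): the two percentages defined above can be computed directly; a minimal Python sketch, with illustrative variable names, assuming the counts for one observation period are known:

    # Informative sketch of the 3.3 formulas; not part of the standard.
    # expected: number of data that should have been observed in the period
    # observed: number of data actually obtained at the sample's data terminal
    def integrity_percent(observed: int, expected: int) -> float:
        return observed / expected * 100.0

    def missing_rate_percent(observed: int, expected: int) -> float:
        return (expected - observed) / expected * 100.0

    # e.g. 1440 expected minute values and 1433 received give an integrity
    # of about 99.5 % and a missing rate of about 0.5 %.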
3.4
accuracy of observation data
Parameter reflecting the quality of the data obtained from the tested product.
Note 1: In this standard it is referred to as accuracy.
Note 2: It is characterized by an error interval, usually expressed in the form (Δ − ks, Δ + ks), where Δ is the systematic error, s is the standard deviation and k is the confidence factor.
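Example (informative): a minimal Python sketch of the Note 2 interval, assuming paired, time-aligned readings from the test sample and the comparison standard; the function name and the default k = 2 are illustrative only:

    import statistics

    # Informative sketch of the (Δ - ks, Δ + ks) interval of 3.4; not part
    # of the standard. Both sequences must be aligned in time.
    def error_interval(sample_readings, reference_readings, k=2.0):
        d = [x - r for x, r in zip(sample_readings, reference_readings)]
        delta = statistics.mean(d)   # systematic error (mean difference)
        s = statistics.stdev(d)      # standard deviation of the differences
        return (delta - k * s, delta + k * s)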
3.5
stability of equipment operation
The ability of the tested product to keep its measurement characteristics unchanged over time or under environmental stress.
Note: In this standard it is referred to as stability.
3.6
reliability of equipment operation
The ability of the tested product to perform the specified functions under specified conditions and within a specified time.
Note: In this standard it is referred to as reliability.
3.7
consistency of measurement results
The degree of agreement between the measurement results when several (two or more) tested samples of the same model measure the same meteorological element at the same time under the same measurement conditions.
Note: In this standard it is referred to as consistency.
3.8
comparability
The degree of agreement between the measurement results when the tested sample and an instrument of the same type in the meteorological observation network, or a specified instrument of the same type, measure the same meteorological element at the same time.
Note: In this standard it is referred to as comparability.
4 Basic requirements
4.1 Test samples
4.1.1 Sample types
Technical equipment specialized for meteorological observation should be tested when it is:
a) newly developed;
b) subject to major improvements in principle, technology, method, structure, materials or process;
c) subject to obvious changes in function or measurement performance;
d) intended for inclusion in meteorological operational applications, including imported products;
e) otherwise commissioned for testing and assessment.
4.1.2 Sampling method
For large equipment, one or more test samples should be provided; for other equipment, three or more should be provided. For disposable observation instruments or consumable equipment, the number of test samples should be increased appropriately according to test needs. When some products are to be drawn from a batch for testing, the testing party should determine the test samples by random sampling.
Example 1: Large equipment includes weather radars, wind-finding radars, wind profiler radars, etc.
Example 2: Disposable observation instruments or consumable equipment include radiosondes, balloons, etc.
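Example 3 (informative): one way to implement the random sampling above; a minimal Python sketch in which the serial-number list, sample size and seed handling are illustrative assumptions:

    import random

    # Informative sketch; not part of the standard. Simple random sampling
    # without replacement when only part of a batch is to be tested.
    def draw_test_samples(batch_serials, n=3, seed=None):
        rng = random.Random(seed)
        return rng.sample(list(batch_serials), n)

    # e.g. draw_test_samples((f"SN{i:04d}" for i in range(1, 101)), n=3)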
4.2 Test regulations
4.2.1 Records should meet the following requirements:
a) The inspection, testing, installation, trial operation, maintenance, repair and withdrawal of the test samples should be recorded in a dedicated record book or electronic document. The records should at least include the start and end times, main content, environmental conditions and a summary of the results.
b) All original data, records and paper documents generated during the test by calculation, collation, proofreading, etc. should be signed or registered by the person in charge. Errors shall not be painted over, traced or scratched out; erroneous content should be crossed out with a single horizontal line, the correct content written next to it, and the change signed by the person who made it. Other problems should be explained separately in writing.
c) During the test, the temperature, humidity and air pressure of the laboratory should be measured and recorded.
4.2.2 During the test, the tested party shall be responsible for technical support of the tested samples when the testing party deems it necessary, but shall not interfere with the test work.
4.3 Handover inspection
4.3.1 During the handover inspection, personnel from both the tested party and the testing party shall be present to confirm the tested samples and their supporting equipment, including necessary instructions for use or maintenance.
4.3.2 The tested party shall deliver the samples to the testing party and fill in a customer entrustment form or sign a testing service contract. After delivery, the tested samples shall be kept by the testing party. If the tested samples are not clearly numbered, they may be renumbered, and the numbers shall be marked firmly and reliably in an obvious place on the outside of the tested samples.
4.4 Termination/suspension and resumption of the test
4.4.1 The test should be terminated if any of the following occurs:
a) the main function check result is unqualified;
b) an important measurement performance test result is unqualified;
c) a main environmental test result is unqualified;
d) a serious fault or defect occurs during the test that affects observation or endangers personal safety;
e) the tested party debugs or modifies the tested product without permission.
4.4.2 For unqualified items, or removable faults and defects that can be resolved on the test site within a short time, the test may be resumed after suspension, but the unqualified items and any other affected items should be retested. If the problem cannot be solved on the test site, or any of the situations given in 4.4.1 occurs, the test should be terminated.
4.4.3 Appearance or structural defects that affect normal observation, or the maintainability or testability of the tested sample itself, may be judged unqualified and the test terminated.
5 Test conditions
5.1 Laboratory environmental conditions
Generally, the laboratory's normal ambient environmental conditions should be maintained. There should be no interference sources in or around the laboratory, such as electromagnetic radiation, thermal radiation, vibration or noise, that affect the measurement performance and data acquisition of the standard instruments, test equipment or test samples.
5.2 On-site environmental conditions
5.2.1 The detection environment for surface measurement instruments should generally comply with 3.2 of GB 31221—2014; any special requirements shall be specified separately.
5.2.2 For weather radars, wind-finding radars, wind profiler radars, sounding signal receivers, etc., objects and ground features around the installation site should not significantly affect the measurement results, and there should be no interference sources, such as electromagnetic radiation or vibration, that affect the normal operation of the test sample.
5.2.3 The test site should be selected according to the environmental adaptability requirements of the test sample, choosing as far as possible locations where the meteorological parameters approach the limit values of the sample's intended operating environment.
5.2.4 The number of test sites should be determined according to the sampling of the samples. For large equipment it should be determined according to the actual situation; test samples intended for use in the national observation station network should be tested in two or more different climate zones; test samples for use in special areas should be tested in the corresponding areas.
5.3 Dynamic comparison test time
The test time should be determined according to Table A.1 in Appendix A and should not be less than 3 months. If the dynamic comparison test lasts longer than the deadline of the reliability test, the test should end according to the dynamic comparison test time. Test samples required to work continuously in all weather conditions over long periods should be tested at least through spring (or autumn), summer and winter. Test samples used seasonally should be tested in the corresponding seasons; test samples used intermittently may be tested for not less than 3 months under actual application conditions. Where other provisions exist, they shall be followed.
5.4 Standards and test equipment
5.4.1 The standards/comparison standards used in static tests and dynamic comparison tests shall be traceable and hold valid verification, calibration or inspection certificates. Additional errors introduced by the test equipment and accessories used shall not lower their accuracy level.
5.4.2 The comparison standards used in dynamic comparison tests shall preferentially be the standard measuring instruments specified for natural atmospheric conditions by the World Meteorological Organization or by the meteorological authority of the State Council.
5.4.3 The comparison standards used in dynamic comparison tests shall be of at least the same accuracy level as the test samples.
6 Test plan
6.1 The testing party shall formulate a test plan based on this standard and the relevant standards or technical requirements of the tested product. The test plan comprises a technical plan (or test outline) and a work plan.
6.2 In principle, all items specified in the relevant standards or technical requirements, such as functional requirements, measurement performance, environmental adaptability, reliability and maintainability, and safety, should be tested. Items that need not be tested, items for which the test conditions are not met, and items to be added should be stated in the technical plan.
6.3 Items for which the testing party lacks the test conditions may be entrusted to a unit with testing qualifications, which shall provide the corresponding test report or certificate.
6.4 A test method should be provided for each test. For test items with measurement error requirements, the standard/comparison standard instruments used, the test equipment, the test conditions, the test points, the number of measurements and the sampling interval at each test point, and the method of data calculation and processing should be stated.
6.5 For tests of measurement performance, static tests should be carried out first to determine whether the measurement range, resolution and permissible error are qualified. Only test samples that have passed the static test may undergo the dynamic comparison test.
6.6 The technical plan should specify the environmental test items, the stresses and their durations, and the methods and requirements for pretreatment, initial testing, performance testing, recovery and final testing of the test samples.
6.7 The work plan mainly covers the test time, location, maintenance and other matters. The test site should be selected according to the environmental adaptability requirements of the tested product; the requirements for the test site and suggestions for specific sites should be given in the work plan.
6.8 After the test plan is formulated, its feasibility should be discussed with the tested party.
7 Test process
7.1 The test is usually carried out in the following steps:
a) appearance and structure inspection;
b) function check;
c) electrical performance test;
d) safety test;
e) measurement performance test;
f) environmental test (climatic environment and mechanical environment);
g) electromagnetic compatibility test;
h) dynamic comparison test;
i) measurement performance retest;
j) data processing and analysis;
k) writing of the test report;
l) data collation and archiving.
7.2 Steps a) to g) and i) of 7.1 are usually carried out in the laboratory; step h) is usually carried out at the site of use.
7.3 The test process may be adjusted as appropriate to the working conditions and test items. If the main purpose of the test is to verify the environmental adaptability of the tested product, the environmental test f) or electromagnetic compatibility test g) of 7.1 may be placed before the measurement performance test e). If the extreme conditions of the environmental test may adversely affect the measurement performance of the tested product, the environmental test f) or electromagnetic compatibility test g) may be placed after the measurement performance retest i). For disposable observation instruments or consumable equipment, such as radiosondes and balloons, the measurement performance retest i) is not performed.
8 Test items and requirements
8.1 Appearance and structure inspection
8.1.1 Appearance inspection is usually carried out visually, mainly checking the surface coating and product markings, etc.
8.1.2 Structure inspection is usually carried out visually, combined with manual adjustment; tools may be used when necessary. It mainly checks whether the structure is sound and whether there is mechanical damage, jamming of rotating parts, etc.
8.1.3 When necessary, the inspection may include checking the dimensions and weight of the test sample.
8.1.4 After the appearance and structure inspection, the parts used to adjust the measurement base point of the test sample may be sealed if necessary.
8.1.5 If interchangeability is required, an interchangeability check should be carried out.
8.2 Function check
8.2.1 The function check is carried out by actual operation. It may be performed together with the appearance and structure inspection, or as dedicated check items. The functions checked should generally include, but not be limited to:
a) sampling, calculation and storage methods of instantaneous observation values;
b) data processing methods;
c) data display and printing;
d) data interfaces and signal transmission;
e) power supply mode and power adaptability;
f) clock running error;
g) fault detection and alarm;
h) other functions specified in the technical requirements.
8.2.2 The sampling interval and the averaging/smoothing time and method of the data recorded by the test sample should be verified in actual operation and compared against the corresponding technical indicators.
8.2.3 The data processing software of the test sample, including the selection of extreme values of meteorological elements and the calculation of derived quantities and operational application parameters, should be checked for the correctness of its calculation formulas; the calculation error may be given if necessary.
8.2.4 The data interface and signal transmission functions of the test sample should be checked for reliability; if necessary, specific parameters such as transmission rate and error rate may be determined through actual data transmission tests.
8.2.5 The clock running error in actual operation should be measured with Beijing time as the reference.
8.2.6 The fault detection and alarm functions should be tested in practice; fault and alarm conditions may be set manually for observation and judgment. If necessary, the fault detection rate and the alarm error rate or correct rate may be given.
8.2.7 For items whose check results do not meet the technical requirements, adjustment and resetting should be allowed; if the item is still unqualified, the test should be terminated.
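Example (informative): a minimal Python sketch of the clock check of 8.2.5, assuming both timestamps are timezone-aware and Beijing time (UTC+8) is the reference; the names and the drift formula are illustrative:

    from datetime import datetime, timedelta, timezone

    BEIJING = timezone(timedelta(hours=8))  # Beijing time, UTC+8

    # Informative sketch; not part of the standard. Positive result: the
    # device clock runs ahead of the Beijing-time reference.
    def clock_error_seconds(device_time: datetime, reference: datetime) -> float:
        return (device_time - reference.astimezone(BEIJING)).total_seconds()

    # Running error expressed as seconds of drift per 24 h of operation,
    # from two error readings taken `hours` apart.
    def drift_seconds_per_day(err_start: float, err_end: float, hours: float) -> float:
        return (err_end - err_start) / hours * 24.0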
8.3 Electrical performance test
8.3.1 Electrical performance usually includes, but is not limited to:
a) power consumption of the whole test sample and of its subsystems;
b) battery life;
c) impedance, bandwidth, rate and time interval of wired transmission;
d) transmission frequency, power, spectrum, pulse width and antenna pattern of wireless transmission;
e) sensitivity, bandwidth and actual receiving performance of wireless receivers and wired transmission terminal equipment;
f) other specified electrical performance parameters.
8.3.2 Power consumption is usually calculated by measuring the input voltage and current of the power supply. If the power consumption of the tested sample is large, or the load is inductive or capacitive, it should be measured with a watt-hour meter for more than 2 h; the power consumption is then the energy recorded by the meter (kW·h) divided by the time.
8.3.3 Battery life is measured in either of the following ways:
a) measure in actual use: connect a voltmeter in parallel and an ammeter in series in the discharge circuit, record the discharge time, and calculate the actual capacity (A·h);
b) measure the actual capacity (A·h) of the battery, then calculate the life from the actual capacity and the power consumption of the equipment.
The actual capacity test should follow the battery discharge curve, and over-discharge of the battery should be avoided during the test.
Example: Connect a heating resistor of low impedance to the output of the battery to increase the discharge current I (ampere, A). Connect an ammeter in series in the discharge circuit and record the discharge time t (hour, h). The battery capacity P (ampere-hour, A·h) is calculated as P = I × t.
8.3.4 For the testing of wireless transmission parameters, refer to GB/T 12649—2017.
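Example (informative): a minimal Python sketch of the arithmetic in 8.3.2 and 8.3.3; the function names are illustrative, and the capacity formula is the P = I × t of the example above:

    # Informative sketch; not part of the standard.

    # 8.3.2: average power = metered energy (kW·h) / elapsed time (h),
    # converted here to watts.
    def average_power_w(energy_kwh: float, hours: float) -> float:
        return energy_kwh * 1000.0 / hours

    # 8.3.3 example: capacity P (A·h) = discharge current I (A) × time t (h),
    # for a (near-)constant discharge current.
    def battery_capacity_ah(current_a: float, discharge_hours: float) -> float:
        return current_a * discharge_hours

    # Life estimated from the actual capacity and the equipment's current
    # draw; over-discharge must be avoided in the real test.
    def battery_life_hours(capacity_ah: float, load_current_a: float) -> float:
        return capacity_ah / load_current_a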
8.4 Safety test
8.4.1 Safety mainly covers contact current, dielectric strength and protective earthing.
8.4.2 Contact current shall be tested and evaluated in accordance with 5.8.1 of GB/T 6587—2012; dielectric strength in accordance with 5.8.2 of GB/T 6587—2012; and protective earthing in accordance with 5.8.3 of GB/T 6587—2012.
8.5 Measurement performance test
8.5.1 Sample size
8.5.1.1 The sample size comprises the test points and the number of measurements at each test point. The test points should be selected across the entire measurement range of the test sample. For linear or nearly linear output characteristics, the test points are usually distributed evenly. For nonlinear output characteristics, the number of test points should be increased where the curvature of the output characteristic curve is larger and reduced where it is smaller.
8.5.1.2 The number of test points may be increased appropriately in the frequently used measurement sections of the measured meteorological element's range, and reduced in the rarely used sections.
8.5.1.3 The upper and lower limits of the measurement range of the test sample, and measurement points representative of its measurement characteristics, such as 0 ℃ and 1013.25 hPa, shall be selected.
8.5.1.4 If the tested party has provided the verification/calibration curve of the test sample's output characteristics together with the verification/calibration points, the test points should as far as possible avoid or be kept away from those verification/calibration points.
8.5.1.5 The number of measurements at each test point should usually be no less than 10, and should be the same at all test points.
8.5.2 Test requirements
8.5.2.1 If the output characteristics of the test sample may produce hysteresis/backlash errors, the test should use the round-robin method: each test point should have data on both rising and falling trends, with the same number of measurements on each, and the test should proceed through the test points in order of their magnitude.
8.5.2.2 If the hysteresis/backlash error of the test sample is negligible, or if the round-robin method may introduce additional errors, the fixed-point test method may be used, i.e. all the measurement samples required at each test point are recorded continuously at that point. Before each recording, it should be ensured that every test datum is independent.
8.5.2.3 If the measuring sensor of the test sample uses a sensing element that may be used only once, several (10 or more) test samples should be measured separately, each test sample being measured only once at each test point.
8.5.2.4 The stabilization time at a test point is determined by the time constant of the test sample and should be more than 5 times the time constant. When several test samples with different time constants are measured together, the stabilization time should be determined by the largest time constant.
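Example (informative): a minimal Python sketch of a round-robin visiting order (8.5.2.1) and the stabilization time rule (8.5.2.4); the factor of 5 comes from 8.5.2.4, while the cycle structure and names are illustrative:

    # Informative sketch; not part of the standard.

    # One cycle = ascending pass + descending pass, so every test point is
    # measured the same number of times on rising and on falling trends.
    def round_robin_sequence(test_points, cycles=5):
        ascending = sorted(test_points)
        descending = list(reversed(ascending))
        return (ascending + descending) * cycles

    # Wait at least 5 times the largest time constant among the samples
    # measured together before recording data at a test point.
    def stabilization_time_s(time_constants_s):
        return 5.0 * max(time_constants_s)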
8.5.3 Retest
8.5.3.1 Only the items related to stability among the measurement performance items shall be retested after the dynamic comparison test is completed.
8.5.3.2 The tested samples shall be kept in the state they were in at the end of the dynamic comparison test, without further maintenance, and shall not be repaired, calibrated or adjusted; their environmental contamination and corrosion shall be checked. Simple maintenance such as removing surface dust may be performed during the retest.
8.5.3.3 The standards, test equipment, test methods and test conditions used in the retest should be the same as those used in the initial test. The retest results should not be corrected.
8.5.3.4 Normally, the test points and the number of measurements at each test point in the retest should be the same as in the initial test; the test points and the number of measurements per point may also be reduced appropriately as needed.
8.5.3.5 If a retest result is unqualified, maintenance may be carried out, but recalibration should not be performed; after maintenance the retest may be repeated. If the result is still unqualified, the measurement result of the test sample is treated as unqualified.
8.6 Dynamic comparison test
8.6.1 Dynamic comparison tests are usually carried out under natural atmospheric conditions, and different test items are selected according to different test purposes. Tests are usually selected from the following items:
a) integrity of data acquisition;
b) accuracy of measurement;
c) stability of equipment operation;
d) consistency of the measurement results of test samples of the same model;
e) reliability and maintainability of the equipment;
f) comparability with instruments observing the same element in the meteorological observation network;
g) comparability with specific instruments or observation methods specified by the user;
h) influence of various influencing factors on the measurement results of the test samples.
8.6.2 All test samples should be tested for items a), c), d) and e) of 8.6.1. If a dynamic comparison standard exists, item b) should be tested. If the test sample is to be incorporated into the existing meteorological observation network or may form a new meteorological observation network, item f) should be tested. Whether to select item g) is determined by user requirements; item h) should be tested when necessary. Items a) and h) should be carried out simultaneously.
8.6.3 The test samples and the comparison standard should be installed in the same observation field, with essentially the same installation method. No instrument should disturb the natural airflow field near any other instrument, and its installation position should not affect the observation of any meteorological element.
8.6.4 To check the consistency of the measurement results of test samples of the same model, two or more test samples should be installed at the same test site.
8.6.5 The consistency test of active remote sensing products may record data by alternating detection among multiple units to avoid mutual interference.
8.6.6 The consistency comparison test of radiosondes uses the method of staggering transmission frequencies and flying multiple test radiosondes on the same balloon.
8.6.7 During the entire test, no adjustment or calibration should be made to the software or hardware of the test sample, and the correction values of the test instrument or of the computer software used to calculate the measurement results should not be changed; otherwise the stability test should be repeated.
8.6.8 For both the test sample and the comparison standard, the data of the dynamic comparison test should take the terminal output value as the measurement result. The data collection interval and averaging/smoothing time of each compared instrument should be the same. If data with different sampling intervals and different averaging/smoothing times are to be recorded in order to analyze differences in the dynamic characteristics of the two sides, a data processing and evaluation plan should be formulated in advance.
8.6.9 For test samples that can observe at any time or whose observation time can be changed by program setting, the interval between any two adjacent observations should be greater than 5 times the largest time constant of the comparison standard and the test sample. The time constant of the test sample should include the sensor and its signal processing time.
8.6.10 If the change of the measured value is small, the time interval for recording data can be appropriately increased, otherwise it can be appropriately reduced. Each instrument should record data at the same time.
8.6.11 For ground meteorological observation instruments using contact measurement sensors, the first comparison observation after installation should be carried out after it is fully balanced with the natural environmental conditions.
8.6.12 For surface meteorological observation instruments, the data samples of the dynamic comparison test should number no less than 60; comparison releases of radiosondes should number no less than 30; the data samples of remote sensing equipment that collects data continuously at high density should number no less than 1000.
8.6.13 Data collection for the dynamic comparison may be carried out intermittently as needed, but the test samples should remain set up at the test site throughout the specified test period and should not be moved indoors or re-erected at another test site.
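Example (informative): a minimal Python sketch linking 8.6.12 to the consistency of 3.7; the minimum-count table mirrors 8.6.12, while the category names and the use of a standard deviation across instruments are illustrative assumptions:

    import statistics

    # Informative sketch; not part of the standard.
    MIN_SAMPLES = {"surface": 60, "sounding": 30, "remote_sensing": 1000}

    # 8.6.12: check that enough comparison data have been collected.
    def enough_samples(kind: str, n: int) -> bool:
        return n >= MIN_SAMPLES[kind]

    # 3.7 consistency: agreement among two or more same-model samples
    # reading the same element at the same time, summarized here as the
    # spread (standard deviation) across instruments at one instant.
    def consistency_spread(simultaneous_readings):
        return statistics.stdev(simultaneous_readings)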