Full article title Development and implementation of an LIS-based validation system for autoverification toward zero defects in the automated reporting of laboratory test results
Journal BMC Medical Informatics and Decision Making
Author(s) Jin, Di; Wang, Dezhi; Wang, Jiajia; Li, Bijuan; Cheng, Yating; Mo, Nanxun; Deng, Xiaoyan; Tao, Ran
Author affiliation(s) Jinan Kingmed Center for Clinical Laboratory, Guangzhou Medical University
Primary contact Email: Online form
Year published 2021
Volume and issue 21
Article # 174
DOI 10.1186/s12911-021-01545-3
ISSN 1472-6947
Distribution license Creative Commons Attribution 4.0 International
Website https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01545-3
Download https://bmcmedinformdecismak.biomedcentral.com/track/pdf/10.1186/s12911-021-01545-3.pdf (PDF)

Abstract

Background: For laboratory informatics applications, validation of the autoverification function is one of the critical steps to confirm its effectiveness before use. It is crucial to verify whether the programmed algorithm follows the expected logic and produces the expected results. This process has always relied on the assessment of human–machine consistency and is mostly a manually recorded and time-consuming activity with inherent subjectivity and arbitrariness that cannot guarantee a comprehensive, timely, and continuous effectiveness evaluation of the autoverification function. To overcome these inherent limitations, we independently developed and implemented a laboratory information system (LIS)-based validation system for autoverification.

Methods: We developed a correctness verification and integrity validation method (hereinafter referred to as the "new method") in the form of a human–machine dialog. The system records personnel review steps and determines whether the human–machine review results are consistent. Laboratory personnel then analyze the reasons for any inconsistency according to system prompts, add to or modify rules, reverify, and finally improve the accuracy of autoverification.

Results: The validation system was successfully established and implemented. For a dataset consisting of 833 rules for 30 assays, 782 rules (93.88%) were successfully verified in the correctness verification phase, and 51 rules were deleted due to execution errors. In the integrity validation phase, 24 projects were easily verified, while the other six projects still required additional rules or changes to the rule settings. Taking the hepatitis B virus (HBV) test as an example, from the setting of 65 rules to the automated release of 3,000 reports, the validation time was reduced from 452 hours (manual verification) to 275 hours (new method), a reduction of 177 hours. Furthermore, 94.6% (168/182) of laboratory users believed that the new method greatly reduced the workload and effectively controlled reporting risk, and they were satisfied with it. Since 2019, over 3.5 million reports have been automatically reviewed and issued without a single clinical complaint.

Conclusion: To the best of our knowledge, this is the first report to realize autoverification validation as a human–machine interaction. The new method effectively controls the risks of autoverification, shortens time consumption, and improves the efficiency of laboratory verification.

Keywords: autoverification, correctness verification, integrity validation, human–computer interaction, risk management, laboratory information system

Background

Autoverification—the use of automated computer-based rules to initially validate laboratory test results[1]—is a powerful tool for the batch processing of test results and has been widely used in recent years. It has obvious advantages in reducing reporting errors, shortening turnaround time (TAT), and improving audit efficiency.[1][2][3][4][5]

Current status and challenges

Our self-developed autoverification system has been used for six years in many disciplines, such as biochemistry, immunology, hematology, microbiology, molecular diagnostics, and pathology. To date, 25,487 rules have been set. The system judges test results 1.1 million times a day and provides audit recommendations for 250,000 report forms, accounting for 87% of the total number of report forms. Approximately 80,000 reports are automatically generated every day. To ensure the effectiveness and safety of the autoverification system, its validation process is very important. The College of American Pathologists' laboratory accreditation checklist item GEN.43875[6] and International Organization for Standardization's ISO 15189:2012 requirement 5.9.2b[7] both require that autoverification systems undergo functional verification before use.

According to published studies, most laboratories that use autoverification have validated it by having personnel and the automated system audit the same results, manually recording the consistency between the two, and reaching a conclusion after a statistical analysis of the results.[2][4][8][9] This manual verification method is easy to operate but has the following limitations:


  1. Massive validation workload: Based on the requirements of WS/T 616-2018 (a recommended Chinese health industry standard)[10] for validation of the autoverification of quantitative clinical laboratory test results, every test and every sample type involved in the autoverification procedure should be tested; the validation time should be no less than three months and/or the number of reports released should be no less than 50,000; and periodic verification should be performed every year for no less than 10 working days and/or no less than 5,000 reports. The validation workload is therefore large and difficult to manage through manual comparison and recording, which greatly increases the post-analytical workload.
  2. Reporting risk: During manual verification, personnel are prone to inertia or judgment errors. The lack of a system control mechanism for this kind of validation can generate reporting risks and directly affect clinical diagnosis and treatment.[2]

Therefore, there is an urgent need to design a verification method that minimizes the workload and systematically controls risks. We report a rule verification system with a small workload and ease of operation that can be used as a reference for self-built and automatic test auditing for laboratories and manufacturers.

Methods

System design

Based on the Clinical and Laboratory Standards Institute's (CLSI) AUTO10 standard[11] and current review processes, we established an autoverification system comprising 11 rule categories. Technicians set the rules according to audit requirements and rule categories. Each assay can have multiple rules, including limited-range checks, combined-mode judgments, delta checks, sampling-time validity judgments, sample-abnormality judgments (e.g., hemolysis, lipemia), and quality control checks. The autoverification system determines whether a report is abnormal according to these rules. Tests that do not trigger a contradiction mode are displayed in green, while failed tests (those triggering a contradiction mode set by the rules) are displayed in red, and the cause of the contradiction is indicated. If all the tests in a report are green, the barcode of the report is also green. If any test in a report is red, the report shows a red barcode, which signals a warning in the system.

After judging the rules according to the above steps, the autoverification system displays colors and abnormal-result prompts in a process called automatic early warning. The automatic warning is only a judgment and is not itself a decision to issue a report. Building on this, the system automatically sends out reports with green barcodes in a process called automated reporting. Automatic early warning and automated reporting together comprise autoverification. The warning step is especially useful in the review of complex diagnostic projects (e.g., molecular diagnostics, pathological testing), where it prompts personnel about absurd values while humans finalize the report. For moderately complex projects (e.g., biochemistry, hematology), the combination of automatic warning and automated reporting is equivalent to the autoverification systems described in a large number of literature reports and to laboratory information system (LIS) automatic reporting. The autoverification process used by our laboratory is shown in Fig. 1.
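The warning-and-barcode logic described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the rule IDs, thresholds, and data structures are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestResult:
    assay: str
    value: float
    qc_passed: bool

@dataclass
class Rule:
    rule_id: str
    description: str
    check: Callable[[TestResult], bool]  # returns True if the result passes

def evaluate(result: TestResult, rules: list[Rule]) -> tuple[str, list[str]]:
    """Judge one test against its rules: 'green' if all pass, else 'red'."""
    failures = [r.description for r in rules if not r.check(result)]
    return ("green" if not failures else "red", failures)

def report_color(test_statuses: list[str]) -> str:
    """A report barcode is green only if every test on the report is green."""
    return "green" if all(s == "green" for s in test_statuses) else "red"

# Example: a QC check and a limited-range check for CRP (limits are illustrative)
crp_rules = [
    Rule("001879", "QC must pass", lambda r: r.qc_passed),
    Rule("002009", "CRP must be < 5 mg/L", lambda r: r.value < 5),
]
color, reasons = evaluate(TestResult("CRP", 1.8, qc_passed=True), crp_rules)
```

A failed test would instead return "red" along with the descriptions of the triggered rules, and any red test turns the whole report barcode red.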


Fig1 Jin BMCMedInfoDecMak21 21.png

Figure 1. The autoverification process. Single test results must meet all the warning rules at the same time. The autoverification algorithm can identify those samples requiring manual review that do not meet the laboratory’s criteria for autoverification. If the automated reporting switch is not activated, then reports that pass the automatic warning step are manually issued. If the automated reporting switch is turned on and all tests on the report pass their warning rules, then the system automatically releases the report.

Validation scheme

Given that the automatic audit is divided into automatic warning and automated reporting, we divide the verification system into two corresponding stages. The first stage, called correctness verification, verifies that the operation of the rules is consistent with the expectations set by the personnel; if a problem is found here, the responsible party may be the program development department. The second stage, called integrity validation, builds on the results of the first stage and verifies whether the set rules cover all the elements of the personnel's report audit. The functional design of the two-stage system is shown in Table 1.

Table 1. Two validation methods designed for two parts of the autoverification system
Phase | Object | Validation method | Explanation | Inconsistent solutions
Automatic warning | Warning rules | Correctness verification | To verify that the warning rules behave as expected and produce the expected outcome | If the warning rule setting is wrong, delete and reset the rules
Automated reporting | Laboratory tests | Integrity validation | To confirm that the laboratory test results that pass the automatic warning can be reported automatically | Add more warning rules according to the laboratory report criteria

Correctness verification

The correctness verification phase confirms whether the execution of a single rule is correct. It is implemented as follows: (1) For newly added rules, the system adds the label "Pending Verification." (2) When the report is reviewed, the system displays the rule judgment result, and a purple color block is displayed to remind the staff to judge whether the execution result of the "Pending Verification" rule is correct. (3) The staff input the judgment result. (4) The system changes the rule status according to the staff input. If it is consistent, the rule label is set to "verified," prompting the personnel to continue to the next stage of verification. If it is inconsistent, the staff is prompted to delete the rule. Figure 2 is a diagram representing this correctness verification process using the example of C-reactive protein (CRP). Figure 3 shows an example of the correctness verification interface.
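As a minimal sketch, the status transition in step (4) might look like the following; the status labels are assumptions, not the authors' actual labels.

```python
# Status labels ("pending_verification", "verified", "deleted") are illustrative.
PENDING, VERIFIED, DELETED = "pending_verification", "verified", "deleted"

def apply_staff_judgment(rule_status: str, human_machine_consistent: bool) -> str:
    """Update a rule's status from the staff's consistency judgment."""
    if rule_status != PENDING:
        return rule_status  # only rules awaiting verification change status
    return VERIFIED if human_machine_consistent else DELETED
```

A consistent human-machine judgment promotes the rule to "verified"; an inconsistent one flags it for deletion, after which the staff reset the rule.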


Fig2 Jin BMCMedInfoDecMak21 21.png

Figure 2. Schematic diagram of the correctness verification using the example of C-reactive protein (CRP). The CRP test result was 1.8 mg/l and passed quality control. The autoverification system searched all the rules for the CRP and hit two of them, No. 001879 and No. 002009. The No. 001879 rule (verified) checks whether the CRP result has passed the quality control. The No. 002009 rule (pending verification) intercepts the results greater than or equal to 5. Therefore, when No. 002009 is triggered, the warning information of the sample appears purple, indicating that the technician needs to confirm whether the warning result is consistent with the manual judgment. In the correctness verification interface (as shown in the subsequent Fig. 3), the system provides two options, the human–machine judgment is consistent or the system judges incorrectly. The technician can confirm that the rule is performing correctly and change its status to “verified.”

Fig3 Jin BMCMedInfoDecMak21 21.png

Figure 3. Correctness verification interface. The CRP result passes the automatic warning according to the No. 002009 rule and is displayed in green. The technician judges whether the automatic warning operated correctly.

Integrity validation

Integrity validation can be started only after the correctness verification of all rules for a project is completed. It is implemented as follows. (1) After the report shows the result of the automatic warning, if the system detects that the report has been changed, a dialog box pops up and asks the reviewer to select the reason for the modification. These reasons include (a) a rule execution error, (b) an inappropriate rule setting value, (c) a required addition of new rules, (d) an issue unrelated to the automatic warning, and (e) an automatic warning prompting the modification. The LIS records the modified content and the reasons for later analysis by personnel. (2) If the laboratory wants to implement automated reporting, a validation number, such as 5,000, can be set according to the complexity of the project review. (3) If the automatic warning result of the report is green (approved), the personnel issue the report directly, and the validation number of the report automatically increases by one. (4) Once the validation number of all items on a report exceeds the set number, the report is automatically released. (5) If the automatic warning result of the report is green (approved) but the result is modified, with the reason for the modification specified as any of (a), (b), or (c), then the LIS clears the validation number for the related items and stops automated reporting. Figure 4 shows the integrity validation process. The validation goals and validation amounts for six projects are shown in Fig. 5.
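The counter logic of steps (2) to (5) can be sketched as follows. This is a hypothetical reconstruction; the class, method, and reason names are illustrative, not the authors' API.

```python
# Reasons (a)-(c) clear the counter and stop automated reporting (names assumed).
RESET_REASONS = {"rule_execution_error", "improper_setting", "new_rule_needed"}

class IntegrityValidator:
    def __init__(self, target=5000):
        self.target = target        # validation number chosen per project complexity
        self.count = 0              # reports issued with a green warning result
        self.auto_reporting = False

    def on_green_report(self, modified=False, reason=None):
        """Called when a report with a green automatic-warning result is handled."""
        if modified and reason in RESET_REASONS:
            # Step (5): clear the validation number and stop automated reporting
            self.count = 0
            self.auto_reporting = False
        elif not modified:
            # Step (3): personnel issue the report and the counter increments
            self.count += 1
            if self.count >= self.target:
                self.auto_reporting = True  # step (4): automated release begins
```

Each unmodified green report raises the counter by one; a modification for reasons (a), (b), or (c) resets the counter to zero, so automated reporting resumes only after the target is reached again.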


Fig4 Jin BMCMedInfoDecMak21 21.png

Figure 4. The integrity validation process.

Fig5 Jin BMCMedInfoDecMak21 21.png

Figure 5. Integrity validation target number settings and recording interface. The validation targets of the six projects shown are all 3,000, and the validation numbers are between 1,900 and 2,500; the corresponding reports therefore cannot yet be released automatically.

Accuracy guarantee

The accuracy of autoverification concerns whether the rules are executed as intended and whether there are omissions (completeness) in the report review. Our new method therefore confirms accuracy from these two aspects, performing function correction and system improvement through correctness verification and integrity validation. In the validation system, we designed the following logic to ensure the accuracy of the function:

  1. A new rule is automatically deleted if it does not pass correctness verification within 10 days.
  2. Rules are not allowed to be modified; an incorrect rule must be deleted and reset.
  3. A rule that fails correctness verification cannot be converted to verified status.
  4. If the autoverification of a single project fails integrity validation, its historical validation count is cleared.
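For example, safeguard 1 (automatic deletion of rules left unverified after the 10-day window) could be implemented along these lines; the function signature and status name are assumptions.

```python
from datetime import date, timedelta

# Illustrative sketch: a rule still pending correctness verification after
# the 10-day window is flagged for automatic deletion.
def should_auto_delete(created: date, status: str, today: date,
                       window_days: int = 10) -> bool:
    return (status == "pending_verification"
            and (today - created) > timedelta(days=window_days))
```

A periodic LIS job could apply this check to every pending rule, enforcing the safeguard without manual housekeeping.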

Data collection

The validation data of 30 assays from October 2019 to January 2020 were collected for analysis; in total, 833 early warning rules were obtained. A total of 926,195 reports were used to evaluate the accuracy of the new method.

Time consumption statistics

We used the HBV test as an example to compare the validation time before and after the new method was adopted. For the measurement of validation time, we divided the complete autoverification process into 10 steps and collected time statistics for each step under both manual verification and the new method, using system records and estimates.

Satisfaction survey

We used a questionnaire to evaluate the effectiveness of the new method as experienced by laboratory technicians. The survey was administered using the online tool WJX, which reports the percentage of each response and the totals.

Results

Correctness verification results

Among the 833 rules, 782 (93.88%) were successfully verified for correctness, with a total of 3,814 validations, including 2,230 (58.47%) released tests and 1,584 (41.53%) intercepted tests. The inconsistencies were investigated, and 51 (6.12%) erroneous rules were deleted. The reasons for verification failure are shown in Table 2.

Table 2. List of reasons for correctness verification failure
Error type | Proportion (%) | Example | Solution
Human error | 63.3 | Incorrect English letter case in the text of the rules, resulting in no warning | Reset the rules
Specific warning target | 24.9 | Early warning of diagnostic results and microscopy results in a special report interface for pathology | Add a supplementary algorithm code
Algorithm code error | 8.4 | HPV typing results could not be verified with the delta check; microbial identification results could not be correlated with a variety of drug sensitivity combinations | Fix the algorithm code
Software compatibility problem | 3.4 | Problem with the precision of the number comparison script | Fix the algorithm code

Integrity validation results

We collected integrity validation data through system export and department feedback. The reasons provided for rule modification were: automatic warning prompting a rule modification (5, 10.6%), rule execution error (0, 0%), improper setting values (15, 31.9%), new rule added (18, 38.3%), and issues unrelated to automatic warning (9, 19.2%). The integrity of all projects was verified within one month, and the problems found are shown in Table 3.

Table 3. List of reasons why integrity validation failed
Test | Reason for not passing | Solution
HPV genotyping | There was no comprehensive analysis of the combined thin-layer cytology results | Analyze the results in association with thin-layer cytology
Urea | The limit range was too wide | Reduce the limit range
Albumin | Review of the detection system produced an error | Specify the detection system
CBC | Test results were checked only on the same day as the barcode | Extend the backdating of historical results
HBsAg/HBsAb/HBeAg/HBeAb/HBcAb | Not all composite-mode scenarios were covered | Add a joint audit of the combined panel results
Cortisol | There was no warning of abnormal rhythms | Add a rule checking the sampling time

Comparison of the two methods

The comparison of manual record analysis and the new method for each step is shown in Table 4. The new method automates four steps, reducing the personnel workload, and automatically controls the enabling and disabling of automated report release by monitoring report modifications through the system. The added correctness verification quickly eliminates rule-setting exceptions and development loopholes while reducing the time needed for personnel analysis. The manual record analysis and the new method took 452 hours and 275 hours to complete, respectively.

Table 4. Comparison of the time consumption (hours) of the two methods for verifying HBV reports for 3,000 cases. The complete autoverification process was divided into 10 steps; in steps 4-6, a total of 3,000 reports were used for the statistics. Steps with the same work content in both methods (steps 1, 2, 4, and 9) were timed manually under the old method; the new step introduced by the new method (step 3) was recorded by the system; steps eliminated by the new method (steps 5, 6, 7, and 10) were recorded as zero; and the remaining step (step 8) was estimated. Steps performed automatically are counted as zero time. aTime spent identifying and locking invalid rules. bReduced workload. cControlled risk.
Step | Manual validation (h) | New method (h)
1. Set 65 rules | 1.5 | 1.5
2. Perform the 130 rule tests | 2.5 | 2.5
3. Correctness verification | 0 | 0.25a
4. Personnel comparison of reports and review of results | 240 | 240
5. Record comparison results | 100 | 0b
6. Analyze the validation number | 10 | 0b
7. Determine whether to activate automatic approval | 5 | 0b
8. Personnel analysis of the reasons for inconsistent audit results | 90 | 30
9. Add and modify rules | 1 | 1
10. Determine whether to turn off autoverification | 1 | 0c
Total | 452 | 275

Satisfaction survey

After using the new method for one year, we conducted a satisfaction survey of the laboratory personnel who used the function. We distributed 182 questionnaires and recovered 168, a response rate of 92.3%. The survey results showed that 94.6% of laboratory users believed that the new method greatly reduced the workload and effectively controlled reporting risk, and they rated the new method as satisfactory or very satisfactory.

Discussion

The core of autoverification lies in the validation of system functions and rules. Because of the complexity of these rules, it is impossible to find all functional defects by relying solely on function validation before the system goes online, and defects arising from human input errors cannot be covered by such validation at all.[12] These functional defects must be found in actual application scenarios across many different rule settings (for example, the incorrect input of full-width symbols); this is the role of correctness verification. Furthermore, rule verification presupposes that the review logic of all reviewers is captured in the system, and gaps in that logic can likewise be discovered only in actual application scenarios. Integrity validation, performed in actual application scenarios, is what truly uncovers these problems.[13][14]

We initially designed the system in two parts, automatic warning and automated reporting, so that complex detection items (e.g., molecular and pathological examinations, in which humans finalize the report while the system prompts errors) could also be included in the automatic review, and laboratory technicians could choose the configuration that fits the needs of different measurements. These two parts correspond to the two verification steps: the automatic warning portion undergoes correctness verification, and the automated reporting portion undergoes integrity validation.

Compared with other systems reported in the literature[4], the main advantages of the new method are a simplified verification process, a reduced verification workload, and assured accuracy of the verification results. As shown in Fig. 6, in the manual verification scheme, junior staff review the results, and then intermediate staff review the results again and combine them with the autoverification results to determine human-machine consistency, completing the validation of the autoverification function. The whole process is human-led. The new method instead uses system monitoring to judge the accuracy of the autoverification based on the operational trajectories of different personnel; by interacting with personnel, the system collects validation data and controls the operation of autoverification.


Fig6 Jin BMCMedInfoDecMak21 21.png

Figure 6. Schematic diagram of the process comparison between the manual method and the new method

We delineate the advantages of the new method, compared to manual validation, in Table 5.

Table 5. Comparison of the advantages of the new method and manual verification
Advantage | Difference | Manual validation | New method | Explanation
Efficiency improvement | Does it add extra workload? | Yes | No | No additional personnel are required to manually record the reasons for inconsistency. With the new method, the system completes the judgment and recording while personnel review reports normally, and it controls the operation of the autoverification program based on the consistency results.
Efficiency improvement | Can the cause of inconsistency be quickly determined? | No | Yes | The main reasons for inconsistency are abnormal rule settings and the lack of necessary rules. The new method sets up correctness verification and integrity validation for these two reasons, respectively; in each verification stage, only the main reason for that stage needs to be traced.
Risk control | Is it possible to skip the validation process? | Yes | No | Starting from the setting of the rules, the system drives the validation process, and no validation step can be skipped.
Risk control | Does it ensure a sufficient amount of validation data? | No | Yes | During normal report issuance by personnel, the system faithfully records the validation data. Before the set data volume is reached, the automated reporting function is prohibited.
Risk control | Can autoverification be used after a failed validation? | Yes | No | When the system confirms that validation has failed because of a defect in autoverification, it prohibits rule conversion and the enabling of automated reporting.

Compared with the traditional method, the true positives and false positives of personnel-based and machine-based audit results are easy to understand, but when an indicator is abnormal, it can be difficult to find the cause, especially when the causes are investigated only after thousands of reports have been released.[15] By then, the audit scenario has blurred in the auditor's memory, and checking the problems one by one becomes inefficient. The process-based validation scheme that we developed is more practical and advantageous: (1) it is easy to operate and quick to initialize; (2) its self-driven control of online functions ensures that every rule is fully verified; and (3) the amount of manual work is small, allowing technicians to complete the verification steps during their daily work.

We divided the entire validation into two modules, correctness verification and integrity validation, based on the concept of process management. Rules are the basic unit of the entire autoverification system. If basic rule verification is not performed at the beginning of the process, then whenever the human-machine judgment is inconsistent, it is difficult to confirm whether the problem is caused by an algorithm error, an execution error, or another reason, which inevitably increases the analysis workload. In contrast, if correctness verification is completed when the rules are established, the only remaining reason for an inconsistency between human and machine during report issuance is rule omission, requiring the technician only to add the corresponding rules.

During the entire verification process, we implemented human–computer interaction, which includes the following:

  1. An "expected sense of play": Before laboratory personnel view the results, they already hold a logical expectation, and in the process they compare the rules against their effects.
  2. Visual stimulation (red, green, and purple backgrounds): These cues can be identified quickly and reduce the strain on laboratory personnel.
  3. System pull: Each verification success or failure is counted automatically at the click of a button, and reaching the target automatically enables the automated reporting function. Together, these features help laboratory personnel, particularly those of the new generation, derive enjoyment from completing the verification process, increasing its core value.[16]

In our experience from this research, the logic of the autoverification validation process is not difficult, but applying it to other laboratories on a large scale requires the middleware supplier to extend the original autoverification system. Our validation system is built on the autoverification system developed by our own laboratory, so adding new functions is comparatively easy; as a supplementary function, however, it is difficult to graft onto existing third-party systems. We suggest that peers refer to the program logic provided in this study. Building on the current functions, we will further strengthen the learning ability of the validation system and convert validation records into learning cases that can guide laboratory technicians in using the autoverification function more efficiently.

Conclusions

In the two years that our online validation has been in use, there have been no defects or reporting risks due to autoverification. We believe that for both middleware-based and self-built autoverification systems, online validation is a useful tool for controlling the risks of autoverification and improving the quality of reports. The detailed process for this method can serve as a reference for the development and implementation of LIS-based autoverification systems.

Abbreviations

AUTO10-A: Autoverification of Clinical Laboratory Test Results; Approved Guideline

CBC: Complete blood cell count

CLSI: Clinical and Laboratory Standards Institute

CRP: C-reactive protein

LIS: Laboratory information system

HBcAb: Hepatitis B virus core antibody

HBeAb: Hepatitis B virus e antibody

HBeAg: Hepatitis B virus e antigen

HBsAb: Hepatitis B virus surface antibody

HBsAg: Hepatitis B virus surface antigen

HBV: Hepatitis B virus

HPV: Human papilloma virus

TAT: Turnaround time

Acknowledgements

We would like to gratefully acknowledge the technicians working in the laboratory for their helpful collaboration. In addition, we thank Xinyu Li and Jiazhen Ren for their technical support.

Contributions

All of the authors had full access to all of the data in the study and take responsibility for the content of the manuscript. RT conceived and designed the study. DJ, QW, BJL, DZP, and JJW performed the case and sample collection, analysis, and interpretation of the data. YTC, XYD, and NXM performed the analysis with constructive discussions. DJ wrote the first draft of the paper. RT reviewed and approved the final manuscript. All authors have read and approved the final manuscript.

Funding

This research was supported by the Guangdong Medical Science and Technology Research Fund (Program Grant A2020597). The funding body was not involved in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.

Ethics approval and consent to participate

This study was approved by the ethics review board of KingMed Diagnostics and adhered to relevant guidelines and regulations. Patient consent was waived by the approving ethics review board, as the use of anonymized historical data does not require patient consent.

Availability of data and materials

All data generated or analyzed during this study are included in this published article. The data underlying this study are available from the corresponding author on reasonable request.

Competing interests

The authors declare that they have no competing interests.

References

  1. 1.0 1.1 Li, J.; Cheng, B; Ouyang, H. et al. (2018). "Designing and evaluating autoverification rules for thyroid function profiles and sex hormone tests". Annals of Clinical Biochemistry 55 (2): 254–63. doi:10.1177/0004563217712291. PMID 28490181. 
  2. 2.0 2.1 2.2 Wang, Z.; Peng, C.; Kang, H. et al. (2019). "Design and evaluation of a LIS-based autoverification system for coagulation assays in a core clinical laboratory". BMC Medical Informatics and Decision Making 19 (1): 123. doi:10.1186/s12911-019-0848-2. PMC PMC6609390. PMID 31269951. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6609390. 
  3. Wu, J.; Pan, M.; Ouyang, H. et al. (2018). "Establishing and Evaluating Autoverification Rules with Intelligent Guidelines for Arterial Blood Gas Analysis in a Clinical Laboratory". SLAS Technology 23 (6): 631–40. doi:10.1177/2472630318775311. PMID 29787327. 
  4. 4.0 4.1 4.2 Randell, E.W.; Short, G; Lee, N. et al. (2018). "Strategy for 90% autoverification of clinical chemistry and immunoassay test results using six sigma process improvement". Data in Brief 18: 1740-1749. doi:10.1016/j.dib.2018.04.080. PMC PMC5998219. PMID 29904674. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5998219. 
  5. Randell, E.W.; Short, G; Lee, N. et al. (2018). "Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay". Clinical Biochemistry 55: 42–8. doi:10.1016/j.clinbiochem.2018.03.002. PMID 29518383. 
  6. College of American Pathologists (21 August 2017). "Laboratory General Checklist - CAP Accreditation Program" (PDF). https://elss.cap.org/elss/ShowProperty?nodePath=/UCMCON/Contribution%20Folders/DctmContent/education/OnlineCourseContent/2017/LAP-TLTM/checklists/cl-gen.pdf. 
  7. "ISO 15189:2012 Medical laboratories — Requirements for quality and competence". International Organization for Standardization. November 2012. https://www.iso.org/standard/56115.html. 
  8. Palmieri, R.; Falbo, R.; Caoowllini, F. et al. (2018). "The development of autoverification rules applied to urinalysis performed on the AutionMAX-SediMAX platform". Clinica Chimica Acta 485: 275–81. doi:10.1016/j.cca.2018.07.001. PMID 29981288. 
  9. Sediq, A.M.-E., Abdel-Azeez, A.G.H. (2014). "Designing an autoverification system in Zagazig University Hospitals Laboratories: Preliminary evaluation on thyroid function profile". Annals of Saudi Medicine 34 (5): 427–32. doi:10.5144/0256-4947.2014.427. PMC PMC6074554. PMID 25827700. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6074554. 
  10. "WS/T 616-2018 (WST 616-2018)". Chinese Standard. 20 August 2018. https://www.chinesestandard.net/PDF/English.aspx/WST616-2018. 
  11. "AUTO10 Autoverification of Clinical Laboratory Test Results, 1st Edition". Clinical and Laboratory Standards Institute. 31 October 2006. https://clsi.org/standards/products/automation-and-informatics/documents/auto10/. 
  12. van Rossum, H.H. (2020). "An approach to selecting auto-verification limits and validating their error detection performance independently for pre-analytical and analytical errors". Clinica Chimica Acta 508: 130–6. doi:10.1016/j.cca.2020.05.026. PMID 32416173. 
  13. Krasowski, M.D.; Davis, S.R.; Drees, D. et al. (2014). "Autoverification in a core clinical chemistry laboratory at an academic medical center". Journal of Pathology Informatics 5 (1): 13. doi:10.4103/2153-3539.129450. PMC PMC4023033. PMID 24843824. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4023033. 
  14. Jones, J.B. (2013). "A strategic informatics approach to autoverification". Clinics in Laboratory Medicine 33 (1): 161–81. doi:10.1016/j.cll.2012.11.004. PMID 23331736. 
  15. Fu, Q.; Ye, C.; Han, B. et al. (2020). "Designing and Validating Autoverification Rules for Hematology Analysis in Sysmex XN-9000 Hematology System". Clinical Laboratory 66 (4). doi:10.7754/Clin.Lab.2019.190726. PMID 32255287. 
  16. Guidi, G.C.; Poli, G.; Bassi, A. et al. (2009). "Development and implementation of an automatic system for verification, validation and delivery of laboratory test results". Clinical Chemistry and Laboratory Medicine 47 (11): 1355–60. doi:10.1515/CCLM.2009.316. PMID 19817645. 

Notes

This presentation is faithful to the original, with only a few minor changes to presentation, though grammar and word usage were substantially updated for improved readability. In some cases important information was missing from the references, and that information was added. For this version, a definition of "autoverification" was added to the introductory sentence of the background.