CASE REPORT
https://doi.org/10.5005/jp-journals-10054-0205
Avoiding False Rejection in Comparison Studies
Vishal Wadhwa1, PK Nigam2
1,2Department of Quality Assurance, Metropolis Healthcare Limited, New Delhi, India
Corresponding Author: Vishal Wadhwa, Department of Quality Assurance, Metropolis Healthcare Limited, New Delhi, India, Phone: +91 9811109715, e-mail: vishal2870@yahoo.com
How to cite this article: Wadhwa V, Nigam PK. Avoiding False Rejection in Comparison Studies. Indian J Med Biochem 2022;26(1):37–40.
Source of support: Nil
Conflict of interest: None
Received on: 15 October 2022; Accepted on: 16 November 2022; Published on: 03 January 2023
ABSTRACT
Aim: Comparison studies are an integral part of method verification in a laboratory. Three cases are shared to clarify how acceptance criteria should be applied carefully before rejecting a new method/analyzer.
Background: Verification of a new method/analyzer is undertaken by all laboratories, but care needs to be exercised in the evaluation of results due to the complexity involved and the lack of proper guidance in the literature.
Case description: Three examples comparing the results of two analyzers are shared to bring out the unique scenarios that emerge in the interpretation of the important parameters involved, namely, average bias% (B%), the correlation coefficient (R), and the coefficient of determination (R²).
Conclusion: Results of a comparison study should be evaluated with caution, as not all parameters may meet the acceptance criteria even when there is no adverse impact on patient results.
Clinical significance: Poorly evaluated verification studies can lead to the false rejection of new methods, increasing the cost of patient care.
Keywords: Bias%, Comparison, Correlation.
BACKGROUND
As coronavirus disease-2019 (COVID-19) pandemic waves keep coming, so do new laboratories to ease the ever-increasing demand in the diagnostic sector. As this number grows, so does the number of users seeking guidance on the performance and interpretation of method verification exercises. When laboratories perform comparison studies, most are concluded as conforming to acceptance criteria; however, a few fail because they do not meet the acceptance criteria, resulting in the rejection of the new method/analyzer. Through this manuscript, we endeavor to bring out the nuances involved in interpreting the comparison study, which is an integral part of any method verification exercise, so as to prevent false rejection.
CASE DESCRIPTION
The following three cases are discussed: the first is a classical case where the result was acceptable; the second looks like an unacceptable comparison but is not; and the third is a true case of failure, caused by the wrong comparator rather than by the analyzer failing to perform to the standards committed by the manufacturer.
The results of testing samples on the comparator (method A) and the new analyzer (method B) were entered in an Excel sheet as shown in Figures 1 and 2, and the percentage difference was calculated as (method B − method A) × 100/method A. A regression analysis plot was prepared, and the results were evaluated both statistically and graphically.
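For illustration, these calculations can be reproduced in a few lines of Python. This is a minimal sketch using made-up paired values; the data, variable names, and use of SciPy are our assumptions, not part of the study.

import numpy as np
from scipy.stats import linregress

# Hypothetical paired results: comparator (method A) vs new analyzer (method B)
method_a = np.array([98.0, 105.0, 110.0, 120.0, 135.0, 150.0])
method_b = np.array([99.0, 104.0, 112.0, 119.0, 137.0, 151.0])

# Percentage difference per sample: (B - A) x 100 / A
pct_diff = (method_b - method_a) * 100.0 / method_a
avg_bias = pct_diff.mean()

# Linear regression of method B on method A: y = ax + b
fit = linregress(method_a, method_b)

print(f"Average bias% = {avg_bias:.2f}")
print(f"y = {fit.slope:.3f}x + {fit.intercept:.3f}")
print(f"R = {fit.rvalue:.4f}, R2 = {fit.rvalue**2:.4f}")

Acceptance would then be judged against the laboratory's allowable bias and the R > 0.975 and R² > 0.95 criteria used in the cases below.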
Case 1: The average bias obtained was 0.94% against an acceptable bias of 5.0%. The R and R² values were above 0.975 and 0.95, respectively; hence, the comparison is considered passed (Fig. 1).
Case 2: The average bias obtained was 3.4% against an acceptable bias of 5.0%. The R and R² values were below 0.975 and 0.95, respectively; hence, the comparison failed.
Review: The spread of the differences between results (B − A) was between 0.1 and 9, yet the regression equation (Fig. 2) shows a constant error of 69.829 (equation: y = ax + b, where b is the constant error). This is obviously not correct, and hence the equation should not be used to arrive at a conclusion. To prove this, presume a new data point of 50 and 48 for methods A and B, respectively (this will not disturb the average bias), and add it to the existing data; it will be seen that both R and R², which were failing earlier, are now within acceptable limits (Fig. 2).
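The leverage effect of that single added point is easy to reproduce. The following is a minimal sketch assuming simulated data for a tightly clustered analyte; the values are illustrative, not the study's actual results.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Tightly controlled analyte: a narrow range of values with small
# analytical scatter between the two methods
a = rng.uniform(135.0, 145.0, 20)
b = a + rng.normal(0.0, 2.0, 20)
r_narrow, _ = pearsonr(a, b)   # typically well below 0.975

# Append one distant point (50, 48): the average bias barely moves,
# but the widened range inflates the correlation
a_wide = np.append(a, 50.0)
b_wide = np.append(b, 48.0)
r_wide, _ = pearsonr(a_wide, b_wide)  # now close to 1

print(f"R without the extra point: {r_narrow:.3f}")
print(f"R with the extra point: {r_wide:.3f}")

This illustrates that R and R² track the spread of the data as much as the agreement between methods; with a narrow analytical range, even excellent agreement can fail the correlation criteria.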
Case 3: The average bias obtained was −38.956% against an acceptable bias of 14.6%, but both R and R² passed.
Review: It was found that methods A and B were run on kits that had different reference intervals for the parameters under study (i.e., amylase and lipase). Later, when method B was compared with a peer kit, the average bias was 10.5%, which was acceptable (Fig. 3).
DISCUSSION
A comparison study is performed to establish the manufacturer's claims on accuracy in the laboratory. Evaluation can be difficult for new laboratories, despite the availability of acceptance criteria on bias% in the Ricos et al. biological variation database (available at: http://www.westgard.com).1
An error in interpretation can lead to the acceptance of a failed method or to false rejection. Through the cases discussed, we have brought out two pertinent issues: first, methods should be compared against the correct peer for tests where different methods are likely to give totally different results (i.e., have different biological reference intervals); and second, when comparing results of an analyte that is tightly controlled physiologically, for example, chloride or sodium, both R and R² can fail due to statistical limitations despite there being only a narrow difference between the results of the two methods. In such a case, the laboratory should conclude on the basis of the average bias obtained.
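One way to encode this recommendation is as a small decision helper. The following sketch assumes the acceptance limits quoted in the cases above; the function, its name, and the narrow-range flag are our constructs, not a published standard.

def evaluate_comparison(avg_bias_pct, allowable_bias_pct, r, r2, narrow_range):
    """Summarize a comparison study, treating R/R2 as advisory for
    analytes with a narrow analytical range (e.g., sodium, chloride)."""
    bias_ok = abs(avg_bias_pct) <= allowable_bias_pct
    corr_ok = r > 0.975 and r2 > 0.95
    if bias_ok and corr_ok:
        return "accept"
    if bias_ok and narrow_range:
        # Case 2 scenario: correlation fails for purely statistical
        # reasons; conclude on the average bias instead
        return "accept on average bias (R/R2 uninformative)"
    return "investigate or reject"

For Case 2, evaluate_comparison(3.4, 5.0, r=0.9, r2=0.81, narrow_range=True), with illustrative correlation values, would return acceptance on the basis of the average bias alone.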
CLINICAL SIGNIFICANCE
The cases cited provide laboratories with guidance on how to interpret the results of comparison studies and conclude on the acceptance or rejection of a new analyzer/method.
ORCID
Vishal Wadhwa https://orcid.org/0000-0003-3666-5859
REFERENCE
1. Ricos C, et al. The biological variation database. Available at: http://www.westgard.com. Accessed on: 1 February 2022.
© The Author(s). 2022 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and non-commercial reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.