CASE REPORT


https://doi.org/10.5005/jp-journals-10054-0205
Indian Journal of Medical Biochemistry
Volume 26 | Issue 1 | Year 2022

Avoiding False Rejection in Comparison Studies


Vishal Wadhwa1, Puneet Kumar Nigam2

1,2Department of Quality Assurance, Metropolis Healthcare Limited, New Delhi, India

Corresponding Author: Vishal Wadhwa, Department of Quality Assurance, Metropolis Healthcare Limited, New Delhi, India, Phone: +91 9811109715, e-mail: vishal2870@yahoo.com

How to cite this article: Wadhwa V, Nigam PK. Avoiding False Rejection in Comparison Studies. Indian J Med Biochem 2022;26(1):37–40.

Source of support: Nil

Conflict of interest: None

Received on: 15 October 2022; Accepted on: 16 November 2022; Published on: 03 January 2023

ABSTRACT

Aim: Comparison studies are an integral part of method verification in a laboratory. Three cases are shared to clarify how acceptance criteria should be applied carefully before rejecting a new method/analyzer.

Background: Verification of a new method/analyzer is undertaken by all laboratories, but care needs to be exercised in the evaluation of results due to the complexity involved and the lack of proper guidance in the literature.

Case description: Three examples comparing the results of two analyzers are shared to bring out the unique scenarios that emerge in the interpretation of the important parameters involved, namely, average bias% (B%), correlation (R), and strength of correlation (R²).

Conclusion: Results of the comparison study should be evaluated with caution as all parameters may not perform as per acceptance criteria despite having no adverse impact on patient results.

Clinical significance: Poorly evaluated verification studies can lead to the false rejection of new methods, increasing the cost of patient care.

Keywords: Bias%, Comparison, Correlation.

BACKGROUND

As coronavirus disease-2019 (COVID-19) pandemic waves keep coming, so do new laboratories, set up to ease the ever-increasing demand in the diagnostic sector. As this number grows, so does the number of users seeking guidance on the performance and interpretation of method verification exercises. When laboratories perform comparison studies, most are concluded as conforming to acceptance criteria; however, a few appear to fail the acceptance criteria, resulting in the rejection of the new method/analyzer. Through this manuscript, we endeavor to bring out the nuances involved in the interpretation of the comparison study, an integral part of any method verification study, so as to prevent false rejection.

CASE DESCRIPTION

The following three cases are discussed: the first is a classical case where the result was acceptable; the second looks like an unacceptable comparison but is not; and the third is a true case of failure, caused by the wrong comparator rather than by the analyzer falling short of the standards claimed by the manufacturer.

The results of testing samples on the comparator (method A) and the new analyzer (method B) were entered in an Excel sheet, as shown in Figures 1 and 2. The percentage difference was calculated for each sample as (method B − method A) × 100/method A. A regression analysis plot was prepared, and the results were evaluated both statistically and graphically.
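For laboratories that prefer to script the same workflow rather than use a spreadsheet, the sketch below reproduces the calculation described above in Python with NumPy. The paired values are illustrative placeholders, not data from the cases reported here, and acceptance limits for bias%, R, and R² must still be taken from the relevant guideline for each analyte.

# Minimal sketch of the comparison-study statistics (illustrative data only)
import numpy as np

method_a = np.array([4.2, 5.1, 6.8, 9.4, 12.0, 15.3, 18.7, 22.1])  # comparator results
method_b = np.array([4.3, 5.0, 6.9, 9.6, 12.3, 15.1, 19.0, 22.5])  # new analyzer results

# Percentage difference per sample: (B - A) * 100 / A, then the average bias%
pct_diff = (method_b - method_a) * 100.0 / method_a
avg_bias_pct = pct_diff.mean()

# Regression of B on A, correlation (R) and strength of correlation (R^2)
slope, intercept = np.polyfit(method_a, method_b, 1)
r = np.corrcoef(method_a, method_b)[0, 1]

print(f"Average bias%: {avg_bias_pct:+.2f}")
print(f"Regression: B = {slope:.3f} * A + {intercept:.3f}")
print(f"R = {r:.4f}, R^2 = {r**2:.4f}")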

Fig. 1: Percentage bias, R, and R² values are acceptable. Verification passes

Fig. 2: Percentage bias is acceptable; R and R² values are not acceptable. Select data points to cover a wide range

Fig. 3: R and R² values are acceptable; percentage bias is unacceptable. Verification to be repeated with the appropriate comparator

DISCUSSION

A comparison study is performed to verify the manufacturer's claims on accuracy in the laboratory. Evaluation can be difficult for new laboratories, despite the availability of acceptance criteria for bias% in the Ricos et al. biological variation database (available at http://www.westgard.com).1

An error in interpretation can lead to the acceptance of a failed method or to false rejection. Through the cases discussed, we have brought out two pertinent issues. First, a method must be compared against the correct peer for tests where different methods are likely to give very different results (i.e., have different biological reference intervals). Second, for tightly controlled analytes such as chloride and sodium, both R and R² can fail due to statistical limitations even when the difference between the results of the two methods is narrow; in such a case, the laboratory should conclude on the basis of the average bias% obtained.
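The second issue can also be illustrated numerically. The following synthetic sketch (not data from the cases above) simulates a tightly controlled analyte such as sodium: because the biological spread of patient results is only a few mmol/L, ordinary analytical imprecision dominates the scatter, so R and R² come out low even though the average bias% remains small and clinically insignificant.

# Synthetic illustration: narrow-range analyte, small bias, yet weak correlation
import numpy as np

rng = np.random.default_rng(1)
n = 40
true_na = rng.uniform(136, 144, n)                 # sodium, mmol/L: narrow spread
method_a = true_na + rng.normal(0, 1.5, n)         # comparator with analytical imprecision
method_b = true_na + 0.5 + rng.normal(0, 1.5, n)   # new analyzer: +0.5 mmol/L constant bias

avg_bias_pct = ((method_b - method_a) * 100.0 / method_a).mean()
r = np.corrcoef(method_a, method_b)[0, 1]

print(f"Average bias%: {avg_bias_pct:+.2f}")   # small and clinically insignificant
print(f"R = {r:.3f}, R^2 = {r*r:.3f}")          # typically far below common cut-offs

Where possible, selecting comparison samples that cover a wide concentration range (as noted in Fig. 2) restores R and R²; otherwise the conclusion should rest on the average bias%, as stated above.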

CLINICAL SIGNIFICANCE

The cases cited provide guidance to laboratories on how to interpret the results of comparison studies and conclude on the acceptance or rejection of a new analyzer/method.

ORCID

Vishal Wadhwa https://orcid.org/0000-0003-3666-5859

REFERENCE

1. Dr. Carmen Ricos and the biological variation database. Available at: http://www.westgard.com. Accessed on: 1 February 2022.

________________________
© The Author(s). 2022 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and non-commercial reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.