Extracting data from diagnostic test accuracy studies for meta-analysis
Kathryn S Taylor,1 Kamal R Mahtani,1 Jeffrey K Aronson2

1 Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
2 Centre for Evidence Based Medicine, University of Oxford, Oxford, UK

Correspondence to Dr Kathryn S Taylor, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK; kathryn.taylor@phc.ox.ac.uk


Diagnostic test accuracy (DTA) reviews are carried out to summarise and explore the evidence about the accuracy of specific tests.1 Consider a clinical question as an example:

What is the diagnostic accuracy of community-based point of care natriuretic peptide testing for chronic heart failure?

To answer this question, a DTA review will seek to systematically find and pool individual DTA data in the form of a meta-analysis. However, this can be challenging, particularly when dealing with diagnoses based on measurements with continuous distributions. This is because individual DTA studies typically report the sensitivity and specificity of the test, as recommended by the Standards for Reporting Diagnostic Accuracy (STARD) statement,2 and not the necessary inputs for meta-analysis software. The necessary inputs are:

  • The number with the condition who correctly test positive (true positives, TP).

  • The number without the condition who correctly test negative (true negatives, TN).

  • The number with the condition who incorrectly test negative (false negatives, FN).

  • The number without the condition who incorrectly test positive (false positives, FP).

Using these four numbers we can generate a 2×2 classification table that compares the test result with the ‘true value’ based on the reference test or ‘gold standard’ (figure 1).
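
As a brief illustration (the figures below are invented for this sketch, not taken from any study in the review), the following Python snippet back-calculates the four counts from a study that reports only sensitivity, specificity and the numbers of participants with and without the condition, and then lays them out as the 2×2 table with totals. Rounding to whole participants can introduce small discrepancies, so the recovered counts should be treated as approximate.

```python
# Hypothetical example: a study reports sensitivity 0.90 and specificity 0.80
# in 60 participants with heart failure and 240 without (assumed values).
# Back-calculate the 2x2 counts, rounding to whole participants.

sensitivity = 0.90   # reported sensitivity (assumed)
specificity = 0.80   # reported specificity (assumed)
n_with = 60          # participants with the condition (assumed)
n_without = 240      # participants without the condition (assumed)

tp = round(sensitivity * n_with)       # true positives
fn = n_with - tp                       # false negatives
tn = round(specificity * n_without)    # true negatives
fp = n_without - tn                    # false positives

# Print the 2x2 classification table with totals (cf. figure 1)
print(f"{'':>10}{'Condition +':>14}{'Condition -':>14}{'Total':>10}")
print(f"{'Test +':>10}{tp:>14}{fp:>14}{tp + fp:>10}")
print(f"{'Test -':>10}{fn:>14}{tn:>14}{fn + tn:>10}")
print(f"{'Total':>10}{n_with:>14}{n_without:>14}{n_with + n_without:>10}")
```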

Figure 1 Diagnostic accuracy 2×2 classification table, with totals added. FN, false negatives; FP, false positives; TN, true negatives; TP, true positives.

To perform meta-analysis, for each study, we need to enter these four numbers into …
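
Continuing the illustration, here is a minimal sketch of how the four counts for each study might be tabulated before entry into meta-analysis software, with each study's sensitivity and specificity recalculated as a cross-check against the published values. The study names and counts are invented; Study A reuses the counts recovered above.

```python
# Hypothetical per-study counts (study, TP, FP, FN, TN) assembled for entry
# into meta-analysis software; values are invented for illustration only.
studies = [
    ("Study A", 54, 48, 6, 192),
    ("Study B", 80, 30, 20, 120),
]

print(f"{'Study':<10}{'TP':>5}{'FP':>5}{'FN':>5}{'TN':>5}{'Sens':>8}{'Spec':>8}")
for name, tp, fp, fn, tn in studies:
    sens = tp / (tp + fn)   # sensitivity = TP / (TP + FN)
    spec = tn / (tn + fp)   # specificity = TN / (TN + FP)
    print(f"{name:<10}{tp:>5}{fp:>5}{fn:>5}{tn:>5}{sens:>8.2f}{spec:>8.2f}")
```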


Footnotes

  • Twitter @dataextips

  • Correction notice This article has been corrected since it appeared Online First. A Disclaimer endnote has been added.

  • Contributors KST and KRM conceived the idea of the series of which this is one part. KST wrote the first draft of the manuscript. All authors revised the manuscript and agreed the final version.

  • Funding This research was supported by the National Institute for Health Research Applied Research Collaboration Oxford and Thames Valley at Oxford Health NHS Foundation Trust.

  • Disclaimer The views expressed in this publication are those of the authors and not necessarily those of the National Institute for Health Research or the Department of Health and Social Care.

  • Competing interests KRM and JA were Associate Editors of BMJ Evidence-Based Medicine at the time of submission.

  • Patient consent for publication Not required.

  • Provenance and peer review Commissioned; internally peer reviewed.