Original Article
A review of critical appraisal tools show they lack rigor: Alternative tool structure is proposed

https://doi.org/10.1016/j.jclinepi.2010.02.008

Abstract

Objectives

To evaluate critical appraisal tools (CATs) that have been through a peer-reviewed development process with the aim of analyzing well-designed, documented, and researched CATs that could be used to develop a comprehensive CAT.

Study Design and Setting

A critical review of the development of CATs was undertaken.

Results

Of the 44 CATs reviewed, 25 (57%) were applicable to more than one research design, 11 (25%) to true experimental studies, and the remaining 8 (18%) to individual research designs. A comprehensive explanation of how the CAT was developed, together with guidelines for its use, was available in only five (11%) instances. No validation process was reported for 11 CATs (25%), and 33 CATs (75%) had not been reliability tested. The questions and statements that made up each CAT were coded into 8 categories and 22 items such that each item was distinct from every other.
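As a quick arithmetic check on the proportions reported above, the percentages follow directly from the counts out of the 44 reviewed CATs. A minimal Python sketch (counts taken from the abstract; category labels paraphrased for brevity):

```python
# Counts of CATs reported in the abstract, out of 44 reviewed tools.
TOTAL = 44
counts = {
    "applicable to more than one research design": 25,
    "applicable to true experimental studies": 11,
    "applicable to individual research designs": 8,
    "development and usage guidelines available": 5,
    "no validation process reported": 11,
    "not reliability tested": 33,
}

# Print each count as a percentage of the 44 reviewed CATs,
# rounded to the nearest whole percent.
for label, n in counts.items():
    print(f"{label}: {n}/{TOTAL} = {n / TOTAL:.0%}")
```

Note that the first three categories are mutually exclusive and sum to 44, while the remaining rows describe overlapping properties of the same pool of tools.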

Conclusions

CATs are being developed in ways that ignore basic research techniques, the evidence available for tool design, and comprehensive validation and reliability testing. A basic structure for a comprehensive CAT is proposed; further study is required to verify its overall usefulness. Meanwhile, users of CATs should be careful about which CAT they use and how they use it.

Section snippets

Background

What is new?

  • Many critical appraisal tools (CATs) are designed and published while ignoring basic research and testing protocols.

  • The structure for a new CAT, based on the evidence available, is outlined.

  • When using a CAT, a reliability and validation process should be completed on the results obtained regardless of whether such data are available from other sources.

Critical appraisal, a key component of systematic reviews, is the thorough evaluation of research to identify the best articles on

Methods

The research design was a critical review of the literature, in which a reviewer searched, categorized, and analyzed the literature [1]. The sampling method was a nonprobability sample to saturation, based on a priori inclusion criteria, exclusion criteria, and a search strategy.

Results

Two methods of analysis were applied to the 44 articles in the review. First, a descriptive quantitative analysis explored the structure, research methods, and data analysis used by the articles. Second, a qualitative analysis explored the content of the questions or statements within each CAT so that they could be summarized and classified.

Discussion

Given the range of research designs covered by the articles in this critical review, the perception that systematic reviews are limited to true experimental studies appears to be fading. It was also interesting to note the number of CATs for appraising qualitative research designs, all of which were developed in the 2000s. A great deal of discussion has occurred about whether qualitative studies can or should be aggregated. However, given the importance of systematic

Conclusion

Perhaps the ultimate irony of critical appraisal is that, as part of systematic reviews (arguably the pinnacle of scientific evidence), the tools used are based on each appraiser's concept of research quality. This dependence on a subjective measure may mean that a CAT cannot be developed that takes as its starting point a rational view of the research process as described here. However, given that this rational view is what exists, anyone appraising research should ensure that: the context of the

References (74)

  • S. Cesario et al. Evaluating the level of evidence of qualitative research. J Obstet Gynecol Neonatal Nurs (2002)
  • R.H. DuRant. Checklist for the evaluation of research articles. J Adolesc Health (1994)
  • A.R. Jadad et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials (1996)
  • I. Boutron et al. A checklist to evaluate a report of a nonpharmacological trial (CLEAR NPT) was developed using consensus. J Clin Epidemiol (2005)
  • A.P. Verhagen et al. The Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus. J Clin Epidemiol (1998)
  • T.C. Chalmers et al. A method for assessing the quality of a randomized control trial. Control Clin Trials (1981)
  • M.J. Lichtenstein et al. Guidelines for reading case-control studies. J Chronic Dis (1987)
  • D. Moher et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Lancet (1999)
  • K.S. Khan et al. Undertaking systematic reviews of research on effectiveness: CRD's guidance for those carrying out or commissioning reviews (CRD Report 4). (2001)
  • J.J. Deeks et al. Evaluating non-randomised intervention studies. Health Technol Assess (2003)
  • M. Petticrew. Systematic reviews from astronomy to zoology: myths and misconceptions. BMJ (2001)
  • M. Dixon-Woods et al. How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res (2006)
  • A. Moyer et al. Rating methodological quality: toward improved assessment and investigation. Account Res (2005)
  • P. Jüni et al. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ (2001)
  • K.J. Devers. How will we know “good” qualitative research when we see it? Beginning the dialogue in health services research. Health Serv Res (1999)
  • A.R. Jadad et al. Guides for reading and interpreting systematic reviews: II. How did the authors find the studies and assess their quality? Arch Pediatr Adolesc Med (1998)
  • P. Jüni et al. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA (1999)
  • A. Kuper et al. Critically appraising qualitative research. BMJ (2008)
  • J.C. Valentine et al. A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: the Study Design and Implementation Assessment Device (Study DIAD). Psychol Methods (2008)
  • S. Armijo Olivo et al. Scales to assess the quality of randomized controlled trials: a systematic review. Phys Ther (2008)
  • M.K. Cho et al. Instruments for assessing the quality of drug studies published in the medical literature. JAMA (1994)
  • A.E. Bialocerkowski et al. Application of current research evidence to clinical physiotherapy practice. J Allied Health (2004)
  • J. Burnett et al. Development of a generic critical appraisal tool by consensus: presentation of first round Delphi survey results. Internet J Allied Health Sci Pract [serial on the Internet] (2005)
  • C.G. Maher et al. Reliability of the PEDro scale for rating quality of randomized controlled trials. Phys Ther (2003)
  • A.M. Glenny. No “gold standard” critical appraisal tool for allied health research. Evid Based Dent (2005)
  • P. Katrak et al. A systematic review of the content of critical appraisal tools. BMC Med Res Methodol (2004)