

What does expert opinion in guidelines mean? A meta-epidemiological study
  1. Oscar J Ponce1,
  2. Neri Alvarez-Villalobos1,
  3. Raj Shah2,
  4. Khaled Mohammed3,
  5. Rebecca L Morgan4,
  6. Shahnaz Sultan5,
  7. Yngve Falck-Ytter6,
  8. Larry J Prokop7,
  9. Philipp Dahm8,
  10. Reem A Mustafa9,
  11. Mohammad H Murad1
  1. 1 Evidence-based Practice Center, Mayo Clinic, Rochester, Minnesota, USA
  2. 2 Department of Medicine, University of Missouri-Kansas City, Kansas City, Missouri, USA
  3. 3 Pediatric Residency Program, University of Minnesota, Minneapolis, Minnesota, USA
  4. 4 Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Canada
  5. 5 Division of Gastroenterology, Hepatology, and Nutrition, University of Minnesota, Center for Chronic Diseases Outcomes Research, Minneapolis Veterans Affairs Healthcare System, Minneapolis, Minnesota, USA
  6. 6 Division of Gastroenterology, Case Western Reserve University, Cleveland, Ohio, USA
  7. 7 Mayo Clinic Libraries, Rochester, Minnesota, USA
  8. 8 Department of Urology, University of Minnesota, Minneapolis, Minnesota, USA
  9. 9 Division of Nephrology and Hypertension, University of Kansas Medical Center, Kansas City, Kansas, USA
  1. Correspondence to Dr Mohammad H Murad, Evidence-based Practice Center, Mayo Clinic, Rochester, MN 55905, USA; murad.mohammad{at}


Summary box

What is already known about this subject?

  • Many guidelines use the term expert opinion in recommendations when evidence is considered to be insufficient.

  • Guidelines that use the term expert opinion are often viewed as less rigorous.

  • It is unclear what expert opinion means.

What are the new findings?

  • Many guidelines that used the term expert opinion did not provide a rationale.

  • Expert opinion recommendations often contained evidence of various types.

  • Expert opinion recommendations were most often used to describe indirect evidence.

How might it impact clinical practice in the foreseeable future?

  • Explicit description of evidence type, such as indirect evidence, may add clarity and transparency and may ultimately improve uptake of recommendations.


Clinical practice guidelines are systematically developed statements intended to assist clinicians and patients in making decisions about appropriate healthcare in specific circumstances.1 2 Empirical evidence shows that adherence to guidelines improves patient outcomes.3–5 One of the essential requirements of a trustworthy guideline2 is that it be based on systematic reviews of the best available evidence and include an assessment of the quality of that evidence. This construct, quality of evidence, has been described as the certainty that a true effect lies on one side of a specified threshold or within a chosen range.6 The GRADE (Grading of Recommendations, Assessment, Development and Evaluation) Working Group has provided an approach for evaluating the quality of evidence across several discrete domains: study limitations, indirectness, imprecision, inconsistency and publication bias.5

Nevertheless, in the daily language of clinicians and in many scientific publications including guidelines, the term expert opinion (EO) is commonly used. Intuitively, clinical decision making requires expertise, which makes reliance on EO seem logical. Several depictions of the evidence pyramid treat EO as a level of evidence and place it at the bottom of the pyramid, either as a unique category or combined with preclinical studies and case reports, implying low validity. However, the term ‘opinion’ is defined in dictionaries as a ‘generally held view’, a ‘belief’ or a ‘judgement formed in the mind about a particular matter’.7 These definitions are not fully congruent with the definition of evidence as an empirical observation. Clinical practice guidelines that are based on EO may be viewed negatively and considered less trustworthy.5 Yet clinical expertise remains critical in decision making. The GRADE approach has therefore been that EO should not be used as a separate category of evidence; rather, clinical expertise is considered an essential ingredient for interpreting evidence of all study types and for formulating evidence-based recommendations.

Therefore, we sought to critically evaluate contemporary clinical practice guidelines in which the supporting evidence for specific recommendations was described as EO to determine the rationale of guideline developers for using EO and explore factors that are associated with EO use. We hypothesised that EO may reflect several types of evidence that can potentially be described using better semantics and rated more appropriately using the standard quality of evidence domains. More accurate grading of evidence may lead to more clarity and transparency and may reduce the perceived tension between evidence and expertise.


Methods

This meta-epidemiological study uses a systematic review approach and follows appropriate reporting guidelines.8

Search strategy

A comprehensive search developed by an expert librarian (LJP) with input from study investigators was conducted from 1 January 2010 to 3 June 2016 to capture contemporary published guidelines. We searched the following databases: Ovid MEDLINE In-Process & Other Non-Indexed Citations, Ovid MEDLINE and Ovid EMBASE. Controlled vocabulary supplemented with keywords was used to search for clinical practice guidelines (online supplementary appendix). We also conducted hand searches (internet search) for websites of several professional societies and entities that we knew produce guidelines and use EO designation (Infectious Diseases Society of America, Centers for Disease Control and Prevention, American Psychological Association, American Academy of Family Physicians, American Society for Reproductive Medicine, American Society of Clinical Oncology, American Urological Association, American Heart Association and American Diabetes Association).


Guidelines were included if they reported EO as part of their quality-of-evidence assessment system, or if their strength-of-recommendation grading system had a level stating that EO was the only basis of the recommendation. We excluded guidelines that did not use EO as a grade. We piloted the study selection process on five guidelines to ensure a common understanding among reviewers.

Data collection

Teams of paired reviewers independently extracted data using a prepiloted screening and abstraction form. We assigned several a priori categories for the type of evidence cited as EO, which included extrapolation from research studies (randomised, observational, case series/case reports or basic science), and for the rationale provided for EO recommendations. We also assessed whether EO recommendations were better described as good practice statements (ie, statements supported by a wealth of indirect evidence and no direct evidence9). We estimated the observed agreement among reviewers in assessing the type of evidence cited in EO recommendations. Consensus was reached by discussion with a third reviewer. We extracted the following variables: publication year, society/organisation that developed the guideline, guideline field, presence of methodology experts in the development of the guideline (identified through authorship contribution, affiliation or by searching authors’ background on Google), declared conflicts of interest, definition of EO, use of systematic reviews, and characteristics of EO recommendations such as the stated rationale for using EO and the type of evidence cited as EO.
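The observed agreement referred to above is the simple proportion of items that two reviewers categorised identically, without chance correction. A minimal sketch (the labels and the `observed_agreement` helper below are hypothetical illustrations, not the study’s data):

```python
def observed_agreement(rater_a, rater_b):
    """Proportion of items on which two reviewers assigned the same
    category (raw observed agreement, no chance correction)."""
    assert len(rater_a) == len(rater_b)
    matches = sum(x == y for x, y in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical evidence-type labels assigned by two reviewers
a = ["rct", "obs", "case", "rct", "obs", "rct", "obs", "case", "rct", "obs"]
b = ["rct", "obs", "rct", "rct", "obs", "rct", "case", "case", "rct", "obs"]
print(observed_agreement(a, b))  # → 0.8
```

A chance-corrected statistic such as Cohen’s kappa could be computed from the same paired labels; the paper reports only the raw observed agreement (77%).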

Statistical analysis

We used descriptive statistics to summarise the epidemiology of EO recommendations (prevalence, trend over time) and the rationale and evidence type cited in EO recommendations. We explored characteristics of EO recommendations associated with citing evidence (rate of EO per guideline, rationale for EO, presence of a methodologist on the guideline team and type of systematic review process used in the guideline). We used Pearson’s χ2 test or Fisher’s exact test to compare nominal variables. For continuous variables, we used the one-sample Kolmogorov–Smirnov test to assess normality of the distribution. The unpaired Student’s t-test or the Mann-Whitney U test was used to compare groups. A two-tailed p value <0.05 was considered statistically significant. Statistical analyses were done using SPSS V.20.0.
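The test-selection workflow described above (normality check, then a parametric or non-parametric comparison; exact tests for nominal variables) can be sketched with SciPy. The helper function, data and 2×2 counts below are illustrative assumptions, not the study’s actual analysis, which was done in SPSS V.20.0:

```python
import numpy as np
from scipy import stats

def compare_continuous(a, b, alpha=0.05):
    """Hypothetical helper mirroring the workflow above: check each group
    for normality with a one-sample Kolmogorov-Smirnov test, then compare
    groups with an unpaired t-test (normal) or Mann-Whitney U (otherwise)."""
    normal = all(
        stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue >= alpha
        for x in (a, b)
    )
    if normal:
        return stats.ttest_ind(a, b).pvalue   # unpaired Student's t-test
    return stats.mannwhitneyu(a, b).pvalue    # Mann-Whitney U test

# Nominal variables: Fisher's exact test on a 2x2 table (illustrative counts)
table = [[24, 45], [10, 59]]
_, p_nominal = stats.fisher_exact(table)

rng = np.random.default_rng(0)
p_cont = compare_continuous(rng.normal(10, 2, 50), rng.normal(12, 2, 50))
```

Note that fitting the normal’s parameters from the sample before the Kolmogorov–Smirnov test is a simplification; a test designed for estimated parameters (eg, Lilliefors or Shapiro–Wilk) would be more rigorous.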


Results

Search results

We assessed 1106 references for eligibility. We included 69 guidelines that provided 2390 recommendations, of which 907 (37.9%) were labelled as having a level of evidence designated as EO. The process of guideline selection is depicted in figure 1.

Figure 1

Guideline selection process.

The number of guidelines with EO recommendations increased from 6 (8.7%) in 2010 to 17 (24.6%) in 2016. Twenty-four (34.8%) of the 69 guidelines included a methodologist in their development, and almost all guidelines (91.3%) declared at least one author with a financial conflict of interest. All guidelines used systematic reviews to select evidence (59.4% used published reviews or conducted rapid reviews; 40.6% commissioned or conducted their own). The most common clinical fields addressed by guidelines were endocrinology (58%), urology (18.8%) and orthopaedic surgery (7.2%). The characteristics of these guidelines are summarised in table 1. The observed agreement among reviewers about the underlying type of evidence used in EO recommendations was 77%.

Table 1

Characteristics of the included guidelines

Rationale and evidence type labelled as EO

Only seven societies provided a definition of EO (online supplementary table 1). These definitions invoked a lack of evidence, evidence still being developed in research, or reliance on physiology, bench research or the panel’s own clinical experience. The majority of EO recommendations provided no rationale for using EO (828/907, 91%), and only a few (79/907, 8.7%) stated lack of evidence as the rationale.


Evaluation of the 30.4% (276/907) of EO recommendations in which any type of evidence was cited revealed that the most common reason for using EO was extrapolation of evidence from studies that did not directly answer the guideline’s question (40.2% from randomised trials, 38% from observational studies). A less common reason was inference from case reports and case series (2.2%) (figure 2). A few EO recommendations (2.5%) stated that their evidence was extrapolated from a different population, and we judged 5.6% of EO recommendations to be ones that could have been described as good practice statements. Online supplementary table 2 lists all included guidelines.

Figure 2

Evidence type cited as expert opinion. Categories overlap and proportions do not sum to 100%. The denominator of ‘Good practice statement’ and ‘Extrapolated from a different population’ is all expert opinion recommendations, whereas other categories have the denominator of expert opinion recommendations that reported evidence type.

Having a methodologist as a part of a guideline team was associated with increased rate of EO recommendations per guideline (p=0.03), with using ‘lack of evidence’ as a rationale for the EO recommendations (p=0.04), with increased rate of EO recommendations per guideline that cited evidence (p=0.01) and with guidelines conducting or commissioning their own systematic reviews (p<0.01). Guidelines that commissioned or conducted their own systematic reviews also had increased rate of EO recommendations per guideline that cited evidence (compared with guidelines that did not use systematic reviews or used existing or rapid reviews; p<0.01). Citing evidence in EO recommendations and the proportion of EO per guideline recommendations were not significantly associated with declaring conflicts of interest (p>0.05) (table 2).

Table 2

The effect of a methodologist role and systematic review process on expert opinion recommendations


Discussion

Main findings

We conducted a meta-epidemiological study to evaluate published contemporary clinical practice guidelines in terms of their use of EO as a level of evidence. We found that in this sample of guidelines, EO use was very common. We also found that the majority of EO-based recommendations did not explicitly provide a rationale for the use of EO. In the overwhelming majority of situations where EO recommendations cited evidence, it seemed that EO was in fact not an opinion, but rather an extrapolation of evidence from other populations or settings. As such, it was not congruent with the definition of opinion and could, in our view, have been better described as indirect evidence.10 Indirect evidence, which is evidence that does not directly answer our question, may warrant lower certainty, but not always. Evidence can be indirect in several ways, such as when patients in the study differ from those of interest, when the intervention tested differs from the intervention of interest, or when the study uses a surrogate endpoint. Decisions regarding the importance of indirectness (how much it lowers the quality of evidence) depend on our understanding of whether biological and other factors are sufficiently different that one might expect substantial differences in the direction or magnitude of effect.10

Some of the EO recommendations actually depended on case series and case reports, which usually (but not always) warrant very low certainty in the evidence. We also encountered EO recommendations that could have been described as good practice statements (statements of action supported by a wealth of indirect evidence and little or no direct evidence, which clinicians consider non-controversial and which do not require evidence summaries).9 This suggests that guideline developers often conflate EO and good practice statements. We were surprised not to find EO recommendations in which guideline developers relied on their own clinical practice and experience (ie, based the recommendation on their unsystematic observations in practice). However, since many EO recommendations did not cite evidence, it is plausible that many of these EOs were based on the experts’ clinical observations in their own practice.

This study has also demonstrated that having a methodologist as part of a guideline team was associated with an increased rate of EO recommendations per guideline. This may signify that the evidence was controversial, sparse or of low quality, requiring greater reliance on expert input. The presence of a methodologist was also associated with an increased likelihood that EO recommendations cited evidence, which is expected and likely reflects the methodologist encouraging experts to be more transparent. We were unable to demonstrate a significant impact of financial conflicts on the use of EO; however, prior literature has demonstrated that experts’ opinion is affected by their financial ties to industry.11


This study has demonstrated that most of the time EO is not an opinion; rather, it is indirect evidence or very low-quality evidence. We propose using the correct characterisation and the well-established categories of quality of evidence (ie, high, moderate, low or very low).12 Such use is more accurate and transparent for decision makers and users of guidelines. We hypothesise that proper and explicit description of the quality of evidence may improve uptake of recommendations, since guideline users may otherwise view EO recommendations as less rigorous or less compelling.

While some clinical experts demand that EO be included as a source of evidence during guideline development,13 they define such opinion as follows:

‘…the opinions of experts are based not only on their personal clinical experiences, but also on their accumulated knowledge from a wide range of sources. These include the expert’s personal assessment of the validity of published reports, new knowledge learned at meetings and symposia, awareness of unpublished studies with “negative” results, and knowledge of the (often unreported) practice styles of colleagues in their field of expertise’.13

This definition clearly includes many types of evidence, as opposed to opinion. We propose that each type should be explicitly stated and appraised using clear labels, avoiding the vague terminology of opinion. This approach is congruent with the GRADE approach. GRADE specifically acknowledges that expertise is required for the interpretation of any form of evidence but considers EO to be an interpretation of evidence, not a form of evidence in itself.12

When there is a lack of published evidence, one could aim to systematically summarise observations of experts and trends in their practice. This could be practically achieved by surveying the clinical experts, not about their opinion of what should be recommended, but rather about their observations of patients’ outcomes over the years. This approach has been used by some guidelines and is well received by the clinical community when there is no published evidence addressing the questions that underpin some recommendations.14 15

The quality of evidence supporting the majority of recommendations is usually low, especially given the number and diversity of questions on which clinicians seek guidance. This low quality, together with the fact that added rigour can lead to strict inclusion criteria for evidence, can result in a guideline that is out of context or lacking implementation details. Therefore, for a guideline to be useful, clinical experts are needed to contextualise evidence, extrapolate evidence from indirect sources and interpret low-quality evidence. In addition to using the correct label for evidence (ie, indirect evidence instead of EO), guidelines should provide an explicit explanation of how the experts were selected, how they reached consensus and what problems the experts were asked to address (eg, to weigh in on the strengths and limitations of evidence, to contextualise evidence and recommendations, to assess and further inform the guideline through indirect evidence, to offer opinions based on their personal clinical experiences, or to enrich and support the guideline text with good practice statements).

Limitations and strengths

This study depended on judgements made by our investigators when retrospectively assigning a rationale to published text. In addition, appraising the quality of a study or of a body of evidence involves judgement.16 17 The rationale for EO was not provided in a large number of recommendations, limiting conclusions to the guidelines that provided such a rationale. In some recommendations it was not clear whether EO was a designation for the level of evidence or for the strength of the recommendation. This differentiation, a critical component of the GRADE system, was less clear in guidelines using other development approaches. Another limitation is that a large proportion of EO recommendations were developed by a single professional society. Lastly, the definition of the good practice statement as a distinct type of recommendation is fairly recent, and some guideline developers may not be aware of this designation.18

The strengths of this meta-epidemiological study relate to the comprehensive search and rigorous approach of reviewing studies by independent pairs of reviewers. To our knowledge, this is the first study that attempts to define EO used in guidelines.


Conclusion

Clinical experts are essential to provide context to the evidence and aid in its interpretation. Experts’ engagement in guideline development is particularly critical when evidence is being extrapolated from indirect sources. We suggest avoiding the use of the term expert opinion and replacing it with the appropriate standard terminology for describing the quality of evidence. This approach may improve the uptake of recommendations by improving clarity and transparency.




  • Contributors MHM conceived the study. OJP, NA-V, RS and KM selected studies and extracted data. RLM, SS, YF-Y, LJP, PD and RAM critically revised the manuscript and helped in interpretation. OJP and NA-V conducted analysis. MHM is the guarantor of the work.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
