Abstract
A surprisingly large proportion of medical research still shows poor quality in design, conduct and analysis, leading to far from optimal robustness of findings and validity of conclusions. Research waste remains a problem with a number of causes. Asking the wrong research questions and ignoring the existing evidence are among the preventable ones. Evidence maps are tools that may aid in guiding clinical investigators and help in setting the agenda of future research. In this article, we explain how they serve such a goal and outline the steps required to build effective evidence maps.
- evidence maps
- evidence synthesis
- reviews
- research questions
- research agenda
- research waste
- reporting
As the late Altman put it in a highly cited 1994 paper titled ‘The scandal of poor medical research’, we need less research, better research and research for the right reasons.1 He explained how several factors lay behind the seriously flawed nature of medical research at the time. These included asking the wrong research questions, using inappropriate methodology and analysis, misinterpreting the findings, citing the literature selectively and making unjustified conclusions. Despite subsequent advances in many fields, it is argued that the status quo has not changed much since then.2 More often than not, a combination of these phenomena is present, eventually leading to enormous resource waste with little tangible usefulness or positive impact on clinical practice. A surprisingly large proportion of medical research still shows poor quality in design, conduct and analysis, leading to far from optimal robustness of findings and validity of conclusions.3 In this exposition, we focus on setting a research agenda to ask the right questions and describe a tool called the evidence map as a possible aid.
Asking the right research questions: evidence gap
Being ‘useful’ is a characteristic of clinical research that we cannot do without. Many fields of science can be driven solely by curiosity and can afford not to have an a priori research aim. Clinical research, however, is not among them. The founding premise of conducting clinical research is that it will be useful in informing decision-making, either by itself or in combination with other forms of evidence such as systematic reviews, evidence synopses and other primary studies.4 Asking the ‘right’, that is, necessary, research questions is an integral cornerstone of producing valid and useful clinical research, yet a large proportion of published research lacks that quality.5 6 Journal editors’ enthusiasm for results opposing prior perceptions of evidence, a component of publication bias, gives clinical investigators an incentive to focus on producing research aiming to arrive at such results, sometimes regardless of the validity of methods and analysis or the soundness of conclusions.7 The efficiency of the ways in which clinical research is initiated and conducted should be questioned when it does not address health problems relevant to populations, or interventions and outcomes important to patients and physicians. With much clinical research being conducted with priorities other than those of patients and clinicians in mind,5 8 9 along with the misplacement of resources and research funding to serve such priorities,10 efficiency does indeed need to be questioned. It is therefore not surprising that >85% of the billions of dollars invested in research funding worldwide is wasted every year due to correctable problems.11 Such correctable problems include choosing to ask the wrong research questions, conducting poorly designed studies, failure to publish research findings and failure to report research properly and transparently.
Asking the necessary research questions: context in body of evidence
Similar to asking the wrong clinical research questions, asking unnecessary research questions also leads to poor efficiency and waste. New research endeavours need not be started if sufficient evidence is already available. This is a problem of being aware of what is already known about a topic before setting out to investigate it, and it has been shown to be a prevalent one.12 Hill asked the famous question every researcher needs to answer after obtaining results from their investigations: ‘What does it mean, anyway?’.13 Yet, placing research results in the context of the already available evidence is not done in the majority of trials.11 This can be due to unawareness of the existing body of evidence, to an unintentional mistake by the authors, or to selective citation of portions of the existing body of evidence.11 14 15 The vast majority of new research is not preceded or accompanied by a review of the existing body of literature (typically a brief systematic review).16 Additionally, although reporting guideline statements such as the Consolidated Standards of Reporting Trials (CONSORT) recommendations17 have emphasised the importance of providing the proper context in the discussion sections of published trials, the problem persists.18
A proposed aid: evidence maps
An evidence map is a tool that can assist in addressing these two aforementioned problems in clinical research and help set agendas for future research. One of the main definitions of evidence maps is that they are the systematic organisation and illustration of a broad field of research evidence, with the intent of characterising the breadth, depth and methodology of the relevant evidence and identifying gaps.19 20 Elsewhere, an evidence map has been defined as an approach to providing a visual representation and critical assessment of the evidence landscape for a particular topic or question.21 A more recent definition, drawn from the evidence maps published in the literature, characterises them as a systematic search of a broad field to identify gaps in knowledge and future research needs.22 Under this definition, an evidence map is a user-friendly visual representation of a body of evidence in a figure or graph, a table or a searchable database.
Basic evidence maps
Interest in evidence mapping arose as network meta-analysis became a popular method for synthesising comparative effectiveness evidence across multiple comparisons. Network meta-analyses use bubble graphs in which interventions are represented by nodes or circles, and lines connecting these nodes show the available head-to-head trials comparing the interventions. The thickness of the lines reflects the number of available studies, and the size of the circles is proportional to the number of patients receiving each intervention. This bubble graph is one of the simplest forms of an evidence map. Studying the geometry of these networks reveals important information about a field, for example, which neglected tropical diseases are well studied and which require future research.23 In a second example, the geometry of clinical trials in pulmonary hypertension suggests that out of 75 available trials, only 3 did not use placebo as a comparator.24 Studying network geometry across various conditions suggests that certain comparisons have been avoided by researchers or research sponsors.25
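For readers who wish to draw such a bubble graph programmatically, a minimal sketch in Python follows, using the networkx and matplotlib libraries. The interventions, trial counts and patient numbers are entirely hypothetical and serve only to illustrate the encoding described above (node size proportional to patients randomised, line thickness proportional to the number of head-to-head trials).

```python
# A hypothetical network meta-analysis 'bubble graph': nodes are
# interventions, node size is proportional to the number of patients
# randomised to each intervention, and edge width is proportional to
# the number of head-to-head trials. All numbers are invented.
import matplotlib.pyplot as plt
import networkx as nx

patients = {"Placebo": 5200, "Drug A": 3100, "Drug B": 1400, "Drug C": 300}
trials = [("Placebo", "Drug A", 12), ("Placebo", "Drug B", 7),
          ("Placebo", "Drug C", 2), ("Drug A", "Drug B", 1)]

G = nx.Graph()
G.add_nodes_from(patients)
G.add_weighted_edges_from(trials)  # weight = number of trials

pos = nx.circular_layout(G)
nx.draw_networkx_nodes(G, pos, node_size=[patients[n] / 5 for n in G.nodes()])
nx.draw_networkx_edges(G, pos, width=[d["weight"] for _, _, d in G.edges(data=True)])
nx.draw_networkx_labels(G, pos)
plt.axis("off")
plt.show()
```

In this layout, a comparison that has never been trialled simply appears as a missing line between two nodes, which is exactly the kind of gap an evidence map is meant to expose.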
More sophisticated evidence maps
Additional information beyond what is contained in a network meta-analysis map is needed to better inform the research agenda. In addition to the number of available interventions and how they have been compared, it is critical for research sponsors to know the quality of the available evidence. In a simplified approach, the risk of bias in individual studies can be added to the map. In a more time-consuming but more informative approach, the certainty warranted by the body of evidence can be added. One example concerns the comparative effectiveness of non-pharmacological treatments of depression (cognitive behavioural therapy, naturopathic therapy, biological interventions and physical activity interventions). In a study recently published in BMJ EBM, an evidence map of 367 randomised controlled trials enrolling approximately 20 000 patients across 11 treatments clearly showed which treatments are supported by evidence warranting the highest certainty and which comparisons should be the topic of future research.26
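As a minimal sketch of how certainty might be layered onto such a network, the snippet below continues the hypothetical graph drawn earlier and colours each comparison by a made-up GRADE-style certainty rating; both the ratings and the colour palette are assumptions for illustration, not data from the study cited above.

```python
# Continuing the hypothetical graph G and layout pos from the previous
# sketch: colour each comparison by a made-up GRADE-style certainty
# rating. frozenset keys make the lookup direction-independent.
certainty = {frozenset({"Placebo", "Drug A"}): "high",
             frozenset({"Placebo", "Drug B"}): "moderate",
             frozenset({"Placebo", "Drug C"}): "low",
             frozenset({"Drug A", "Drug B"}): "very low"}
palette = {"high": "#1a9850", "moderate": "#fee08b",
           "low": "#fc8d59", "very low": "#d73027"}

edge_colours = [palette[certainty[frozenset({u, v})]] for u, v in G.edges()]
nx.draw_networkx_nodes(G, pos, node_size=[patients[n] / 5 for n in G.nodes()])
nx.draw_networkx_edges(G, pos, edge_color=edge_colours,
                       width=[d["weight"] for _, _, d in G.edges(data=True)])
nx.draw_networkx_labels(G, pos)
plt.axis("off")
plt.show()
```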
An evidence map may also communicate other information important to stakeholders, such as health disparities, geographic locations and the representation of particular characteristics or morbidities among the populations studied in the published literature.
Developing the map
Several methods of mapping biomedical evidence exist.21 22 26 The steps involved in creating an evidence map are summarised in online supplementary box 1. In brief, stakeholders (researchers, funders, policy-makers and, most importantly, patients) determine the clinical questions of the highest importance to their context; a systematic review or an overview of systematic reviews then elicits the body of available evidence. The certainty or quality of the evidence27 is determined and conveyed to stakeholders. The final step is the visual depiction of the data elements most relevant to the stakeholder, for example, which comparisons are made and which are avoided, which populations are studied or avoided, the size of the body of evidence, risk of bias, certainty or other factors.
Hypothetical examples are shown in figures 1–3. Such maps can be created using a number of available software tools, including Microsoft PowerPoint and Excel, Photoshop or more specialised software such as VOS Viewer or CitNetExplorer. To keep evidence maps easily understandable and intuitive, it is advisable to avoid summarising too many parameters and constructs in a single map. The body of evidence can then be subjected to the GRADE (Grading of Recommendations, Assessment, Development and Evaluations) framework.
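As a hedged illustration of this final depiction step, the sketch below draws a grid-style evidence map with matplotlib: rows are outcomes, columns are comparisons, bubble area encodes the number of trials and colour encodes certainty. All comparisons, outcomes and numbers are hypothetical; empty cells are left blank so that evidence gaps remain visible.

```python
# A hypothetical grid-style evidence map: rows are outcomes, columns are
# comparisons, bubble area reflects the number of trials and colour
# reflects certainty of evidence. All data are invented for illustration.
import matplotlib.pyplot as plt

comparisons = ["A vs placebo", "B vs placebo", "A vs B"]
outcomes = ["Mortality", "Quality of life", "Adverse events"]
n_trials = [[12, 7, 1],   # Mortality
            [4, 6, 0],    # Quality of life
            [9, 2, 1]]    # Adverse events
certainty = [["high", "moderate", "very low"],
             ["low", "moderate", None],   # None = no evidence available
             ["moderate", "low", "very low"]]
palette = {"high": "#1a9850", "moderate": "#fee08b",
           "low": "#fc8d59", "very low": "#d73027"}

fig, ax = plt.subplots()
for i, outcome in enumerate(outcomes):
    for j, comparison in enumerate(comparisons):
        if n_trials[i][j]:  # skip empty cells: these are the evidence gaps
            ax.scatter(j, i, s=n_trials[i][j] * 60,
                       color=palette[certainty[i][j]])
ax.set_xticks(range(len(comparisons)))
ax.set_xticklabels(comparisons)
ax.set_yticks(range(len(outcomes)))
ax.set_yticklabels(outcomes)
ax.invert_yaxis()  # list outcomes top to bottom
ax.set_title("Hypothetical evidence map")
plt.tight_layout()
plt.show()
```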
The inclusion of an evidence map in research proposals and protocols helps convey to stakeholders the value and importance of the intended research study, as well as its potential to fill gaps in knowledge. Including evidence maps in final manuscripts and published papers is also useful to users of evidence: it conveys the same concepts and places the research findings in the context of what is already known on the topic.
Conclusion
Research waste remains a problem with a number of causes. Asking the wrong research questions and ignoring the existing evidence are among the preventable ones. Evidence maps are tools that may aid in guiding clinical investigators and help in setting the agenda of future research.
Footnotes
Contributors FA: conceived the idea of this article, drafted the manuscript, designed the figures and approved the final version. MHM: reviewed the manuscript, provided critical feedback and approved the final version.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Patient consent for publication Not required.