I read this scoping review with great interest. In linguistics there is a term for the phenomenon whereby a word can carry multiple related meanings in different contexts: polysemy. In the context of defining "evidence-based medicine", if we want to understand each other, it has proved successful to define the words we use. Sackett et al., in their seminal article "Evidence based medicine: what it is and what it isn't", also defined what they mean by "evidence", namely: "By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centred clinical research into the accuracy and precision of diagnostic tests (including the clinical examination), the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens."
Doesn't this render it irrelevant for the context of EBM if others define the term differently in other contexts? A bigger problem arises when the term "evidence" is misunderstood or even misused within the context of EBM. This problem might be addressed by better teaching of the originally intended meaning. To my knowledge (as a non-native English speaker), "evidence" is a legal term and thus different from "proof" or "fact". In law, only the sum of the evidence leads to a court decision, and this decision can be wrong, especially if new evidence arises.
The article by Alastair Munro and colleagues (to which this is a rapid response) is of great value and importance: since it was published, there has been considerable discussion and interrogation by the COVID inquiry (https://covid19.public-inquiry.uk/) about whether the closure of schools, ordered on 18th March 2020 by Boris Johnson's government and put into effect two days later on 20th March, was ever necessary.
The rationale for the closure of schools was that it was imperative to bring down, at pace, the transmission rate of COVID-19, then spreading rapidly in the UK. Only three days later, a full, mandated "lockdown" was imposed. This was in the context of a suite of increasingly draconian Non-Pharmaceutical Interventions (NPIs) imposed on an increasingly fretful population on March 12th, 16th, 20th and 23rd.
The question we pose to the authors of the paper under discussion is a simple one: what is the actual, real-world evidence that the 'R' value fell as a result of the closure of schools? Is there any real-world, UK-based information from 2020 that we can use to answer this question? It cannot remotely be answered by reference to other countries, or to mathematical models. The impact of school closures must also be analysed independently of all the other contemporaneous NPIs.
If the opportunity to test the hypothesis that schools were an important source of inter-generational COVID viral spread was not taken, this would be a tragic loss of knowledge that could otherwise have informed the best approach to future pandemics in the UK. We believe that school closures were part of pandemic influenza planning, which now seems to have been the only pandemic-related planning that had ever occurred in the UK. As we all well know, COVID-19 was much less likely to harm children than pandemic influenza, so the medical benefits to children from school closures would have been minimal, possibly nil, while the societal harms caused by school closures were very significant and long-lasting.
In our paper (https://doi.org/10.18103/mra.v11i11.4652) we made a comparison with Sweden, where the withdrawal of in-person education for children aged 15 or less was avoided (except when teachers were unwell and absent). There is much less evidence of educational and psychological damage to children in Sweden (though detailed formal comparisons are naturally very difficult to conduct in retrospect).
The authors carefully and professionally detail in their work the significant and long-lasting harms that school closures brought to children in the UK. These impacts are still being felt today, late in 2023. It is important to establish whether the sacrifices that children were forced to make in 2020 and 2021 actually resulted in any net societal gain, which would most likely be seen in the segment of the population aged over 60 (in which around 90% of hospitalisations and deaths occurred).
As French Nuclear Medicine representatives, we read with great interest the article by Le Guludec et al. entitled "Rapid access to innovative medicinal products while ensuring relevant health technology assessment: Position of the French National Authority for Health". In this interesting and important position paper from the French independent Health Technology Assessment body (HTAb), the “Haute Autorité de Santé” (HAS), the authors state that its recommendations derive from consultations with academics. Although we understand that accessibility of innovative drugs used for Positron Emission Tomography (PET) could be considered a very ancillary issue by the HAS board that authored the paper, these PET imaging molecules are still considered medicinal products from a regulatory standpoint and should be evaluated as such.
We regret that, to our knowledge, none of the academic members of the French Nuclear Medicine Society (SFMN) board were given the opportunity to draw attention to some of the specific features of the drugs commonly used in nuclear medicine by answering the questionnaire sent to the panelists (cf supplemental material). Indeed, we would clearly have answered “Yes” to the following questions:
- Are there specific methodological issues for Health Technology Assessment you wish to bring to our attention?
- Do you identify methodological issues relative to the assessment of innovative drugs in specific therapeutic areas?
We would also have been grateful to have been given the chance to present the opinion of the SFMN regarding the following two key questions and the SWOT analysis proposed by the HAS:
- What clinical trial designs or methodological specificities could be useful to accelerate access to innovative drugs?
- How could the HAS modify its methodology for Health Technology Assessment?
In addition to the opinion of the SFMN, we would like to bring to the authors’ and your readers’ attention the Pipame (1) and Court of Auditors (2) reports, which stressed many years ago that access to medical imaging innovations is of tremendous importance for patients. Both concluded that overcoming ‘administrative’ hurdles regarding evaluation and reimbursement could facilitate access for patients and give a boost to French small-to-medium-sized enterprises (SMEs) by reducing the time to market. Indeed, it is common knowledge that until recently French patients had to go abroad to get access to PET imaging for prostate cancer or Neuro-Endocrine Tumors (NETs), and that contributions to innovations by French SMEs in this domain are still few and far between.
Nuclear medicine uses radiolabeled drugs for targeted irradiation to treat malignant diseases. The SFMN is grateful to the Ministry of Health for allowing our patients to access molecular radiotherapy rapidly and under compassionate use. Even though it does not meet the HAS definition of targeted therapies stricto sensu, molecular radionuclide therapy is by nature a targeted therapy whose indications rely indisputably on PET imaging companion markers, hence the concept of ‘theranostics’. However, most radiolabeled drugs are used for diagnostic purposes, and the number of new PET procedures per year is sky-rocketing (https://www.cnp-mn.fr/wp-content/uploads/2023/01/2021_Enquete-Nationale-... ).
We would particularly like to stress that blinded randomized controlled trials (RCTs), claimed to be the cornerstone of drug efficacy and toxicity assessment, are not relevant for assessing molecular imaging drugs, for reasons of practicality and cost-effectiveness. In our opinion, the lack of appropriate key performance indicators for PET drugs has for many years been the elephant in the HAS medical imaging room.
We were reassured to read that the HAS acknowledges that conditions may exist that make conducting an RCT unreasonable. However, assessing the performance of medical imaging has nothing to do with the hypothetico-deductive approaches deemed to benefit from this RCT exemption. PET imaging has entered an era of continuous progress in which mathematics and physics give rise to reliable metrics such as spatial resolution, detection sensitivity, and dosimetry. Thanks to histopathologic verification, it is possible to calculate diagnostic performance and likelihood ratios (LR) even in small cohorts. Not using these assessments, which are based on evidence with a high level of certainty, has led to bizarre ranking decisions (added medical value, i.e. Amélioration du Service Médical Rendu: ASMR V). In turn, this has hindered French patients’ access to PET procedures that are performed daily in neighboring countries.
For example, take the ‘clinical case’ of 68Ga-edotreotide, a radiopharmaceutical considered innovative in France while being used in standard practice elsewhere for the staging and follow-up of NETs. The HAS concluded that 68Ga-edotreotide has high clinical value (Service Médical Rendu; SMR) but that its added value is poor (Amélioration du Service Médical Rendu; ASMR rank = V) (https://www.has-sante.fr/portail/jcms/c_2758095/fr/iasotoc-05042017-avis-ct15806) because of the lack of a clinically relevant comparator. The HAS therefore considered that somatostatin receptor scintigraphy with 111In-Octreotide should be the relevant comparator, even though it is no longer used in many countries because of its notoriously poor performance, and recommended using 111In-Octreotide upfront. Almost the same is happening with 68Ga-PSMA for prostate cancer.
However:
- Patient irradiation was not taken into consideration: 111In-Octreotide delivers a 2.4-fold higher dose to the patient than 68Ga-edotreotide, so complying with the HAS recommendation raised an ethical issue.
- 111In-Octreotide scintigraphy requires patients to come to the hospital twice, so do we need any prospective evaluation to conclude that a 2-day procedure (the historical comparator) is more expensive than a single-day procedure, whatever the impact on patient management?
- The likelihood ratio is about 12 for 68Ga-edotreotide and 6 for 111In-Octreotide (i.e. the risk of a false negative is 2-fold higher for a NET patient undergoing 111In-Octreotide scintigraphy). Again, the clinical benefit can be derived directly from these metrics.
The added clinical value was also considered poor because of the lack of evidence on patient management. This is a substantial methodological shift, going far beyond comparing diagnostic accuracy with a relevant comparator: it implies a much more ambitious methodology, with the risk of an expensive trial. Considering the time needed to initiate, conduct and analyse such an RCT, it is likely that the treatment under study would be considered obsolete by the time of publication, if any. Moreover, the cost-effectiveness of such an approach remains questionable. Nunn et al estimate that, at a cost of US$100 per dose, profitability is reached at around 1 million doses per year (3, 4). Such massive use of PET tracers costing more than US$1000 is simply not sustainable. Based on the analyses of the radiopharmaceutical market by DiMasi et al (5, 6) and Nunn (3, 4), it is very unlikely that an SME could spend around US$200 million developing a PET drug, especially since PET procedures are reimbursed (at least in France) independently of the cost of the molecule, whatever its ASMR.
However, some years ago the HAS came up with an interesting alternative (HAS Note de cadrage: Place de la technique du ganglion sentinelle dans la stratégie diagnostique de l’envahissement ganglionnaire d’un cancer du sein à un stade précoce, Sept 2011, www.has-sante.fr). Considering that the predictive values of a test can easily be derived from the LR and the pretest probability (7) (i.e. prevalence, TNM stage), the HAS demonstrated that cost-effectiveness studies can be used to determine the value of sentinel node scintigraphy for breast cancer surgery. The same methodology could therefore easily be applied to pragmatic trials or real-world cohorts of PET patients. The SFMN has created a real-world registry that provides more reliable data than the PMSI, since it also includes outpatients, and more detailed data than that collected in early or compassionate use programmes. The theragnostic market is expected to grow from US$4 billion in 2013 to US$14 billion in 2025 (Richard Zimmerman, Oncidium Foundation).
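The derivation of predictive values from the LR and a pretest probability is simple enough to sketch in a few lines. This is an illustrative sketch only, not HAS methodology: the pretest probability of 0.30 is an assumed value chosen for the example, while the LR+ values of 12 and 6 are those quoted above for 68Ga-edotreotide and 111In-Octreotide.

```python
def post_test_probability(pretest_prob, lr):
    """Apply a likelihood ratio to a pretest probability (Fagan's method)."""
    pre_odds = pretest_prob / (1 - pretest_prob)  # probability -> odds
    post_odds = pre_odds * lr                     # Bayes' theorem in odds form
    return post_odds / (1 + post_odds)            # odds -> probability

# Assumed pretest probability of 0.30; LR+ values as quoted above.
for tracer, lr in [("68Ga-edotreotide", 12), ("111In-Octreotide", 6)]:
    print(tracer, round(post_test_probability(0.30, lr), 2))
```

With these assumed inputs, the positive predictive value would be about 0.84 for the higher-LR tracer versus 0.72 for the lower, showing how the clinical benefit falls directly out of the metrics without a comparative RCT.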
We agree with the HAS that one should avoid overusing the term ‘innovation’ when discussing any drug candidate. However, nuclear medicine has come a long way since 2007, when Adrian Nunn pointed out that the approval of radiopharmaceuticals was at an all-time low, with all the major radiopharmaceutical agents in use having been approved over 10 years before. Recent successes such as the Food and Drug Administration approval of Lutathera and NETSPOT have resulted in an increasing number of pharmaceutical companies pursuing theragnostics, with further impetus provided by Novartis’s purchase of Advanced Accelerator Applications for $3.9 billion and of Endocyte, Inc. for $2.1 billion (8).
Times are changing; it’s time to rethink matters collectively in the quest to discover a more efficient and agile way to assess medical imaging innovations.
We remain at the disposal of the HAS to continue this discussion.
References
1. Pôle interministériel de Prospective et d'Anticipation des Mutations économiques (PIPAME). http://competitivite.gouv.fr/documents/commun/Documentation_poles/etudes...
2. Cour des comptes. L’imagerie médicale. Mai 2016. www.ccomptes.fr
3. Nunn AD. The cost of bringing a radiopharmaceutical to the patient’s bedside. J Nucl Med 2007;48:169.
4. Nunn AD. The cost of developing imaging agents for routine clinical use. Invest Radiol 2006;41:206-212.
5. DiMasi JA, Hansen RW, Grabowski HG. The price of innovation: new estimates of drug development costs. J Health Econ 2003;22:151-185.
6. DiMasi JA, Hansen RW, Grabowski HG, Lasagna L. Cost of innovation in the pharmaceutical industry. J Health Econ 1991;10:107-142.
7. Fagan TJ. Letter: Nomogram for Bayes theorem. N Engl J Med 1975;293:257.
8. Cutler CS. Economics of new molecular targeted personalized radiopharmaceuticals. Semin Nucl Med 2019;49:450-457.
I am writing to request further clarification on the paper “Likelihood ratio interpretation of the relative risk”. The “key messages” section of this paper states that the study adds the following to the literature:
⇒ It is demonstrated that the conventional interpretation of the relative risk is in conflict with Bayes’ theorem.
⇒ The interpretation of the relative risk as a likelihood ratio connecting prior (unconditional) intervention risk to outcome conditional intervention risk is required to avoid conflict with Bayes’ Theorem.
I will refer to the first bullet point as “Doi’s Conjecture”. Doi’s Conjecture is also stated in the second section of the main text, where it is claimed that “the usual interpretation (33% increase in the +ve outcome under treatment) contravenes Bayes Theorem”.
No attempt is made within the text to prove Doi’s Conjecture. But perhaps more worryingly, no attempt is made to define the term “interpretation”, a term which is not defined in standard probability theory. The meaning of Doi’s Conjecture is therefore at best ambiguous. Moreover, the manuscript relies substantially on claims about how effect measures are “perceived”, another term which is defined neither in probability theory nor in the manuscript.
The relative risk is defined as the risk of the outcome under treatment, divided by the risk of the outcome under the control condition; that is, as a ratio of two probabilities. This manuscript appears to claim that “interpreting” the relative risk in the manner that it is defined, is inconsistent with Bayes’ Theorem, a fundamental result in probability theory. If this is true, probability theory is in deep conceptual trouble.
There are multiple correct (and mathematically equivalent) ways to represent effects within a study, to predict risk under treatment and to determine the posterior probability. This manuscript provides no coherent reason for one valid approach to take precedence over another valid approach.
Title: “Claims about the main claim”
Authors: Suhail A. Doi, Polychronis Kostoulas, Paul Glasziou
In response to the published article “Likelihood ratio interpretation of the relative risk”
Rapid response: September 16, 2022
The problem in evidence-based medicine arises when we port relative risks derived from one study to settings with different baseline risks. For example, a baseline risk of 0.2 and a treated risk of 0.4 for an event in a trial gives an RR of 2 (0.4/0.2) and a complementary cRR of 0.75 (0.6/0.8). Thus the ratio of LRs (RR/cRR) is 2/0.75 = 2.67. If applied to a baseline risk of 0.5, the predicted risk under treatment with the RR “interpretation” is 1.0, but with the ratio-of-LRs “interpretation” it is 0.73. Here, the interpretation of the risk ratio as a likelihood ratio, using Bayes’ theorem, clearly gives different results and solves the problem of impossible risks, as clearly depicted in the manuscript and the example.
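The arithmetic in this example can be reproduced with a short sketch (the function names are ours, chosen for illustration):

```python
def predicted_risk_naive(p0, rr):
    """Naive portability: multiply the new baseline risk by the trial RR."""
    return p0 * rr  # can exceed 1, i.e. an impossible risk

def predicted_risk_lr(p0, rr, crr):
    """Bayes-consistent portability, treating RR and cRR as likelihood ratios."""
    return p0 * rr / (p0 * rr + (1 - p0) * crr)

# Trial data from the example: baseline risk 0.2, treated risk 0.4
rr = 0.4 / 0.2               # RR = 2.0
crr = (1 - 0.4) / (1 - 0.2)  # cRR = 0.75; ratio of LRs = rr/crr = 2.67

# Ported to a new baseline risk of 0.5
print(predicted_risk_naive(0.5, rr))              # 1.0 (impossible boundary risk)
print(round(predicted_risk_lr(0.5, rr, crr), 2))  # 0.73
```

The naive product hits the boundary of the probability scale (and exceeds it for any baseline above 0.5), while the likelihood-ratio form is guaranteed to stay within [0, 1].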
If, in our effort to highlight the need for this correct interpretation, we have used strong wording that annoyed the commentator, we feel the need to express regret. We hope that the commentator might feel similarly about his scientifically unbecoming choice of wording, which culminated in “Doi’s Conjecture”.
Conflict of Interest
None declared
I would like to congratulate Dr. Abbott and her team on this important and interesting article, which applied the methods of meta-science to the early systematic review articles and the infodemic related to COVID-19.
Indeed, the COVID-19 pandemic arrived quickly and ferociously in early 2020 and lasted until recently; with possible new variants emerging, it still presents the medical community, and indeed scientific circles, with challenging questions. Thanks to the selfless work of researchers, patients and frontline medical staff, we now have some valuable means to deal with this pandemic.
The research community was presented with the challenging task of designing and conducting studies to answer important questions about the new infectious disease in early 2020. The “new” coronavirus was ravaging parts of our world unchecked, so studies were conducted at pace, which unfortunately resulted in many duplicated and methodologically poor studies. On the other hand, the sheer volume of studies may itself have been useful, as it generated evidence to inform us of what does and does not work in combating COVID-19. For example, dexamethasone (RECOVERY trial) was found to be essential for severe COVID-19 patients, while hydroxychloroquine was found to be ineffective.
Having said this, I must state that I am not in support of generating poor quality clinical studies. What I am saying is that different clinical and public health circumstances may call for different overall research strategies. Sometimes we need quick answers to important clinical questions; then, unfortunately, quality is often sacrificed or overlooked for the sake of generating "rapid" evidence, and the early phase of the COVID-19 pandemic may fall into this category. A relative lack of research capacity and resources may also have contributed to this unfortunate phenomenon, as we simply did not have the means to generate a massive amount of high quality research in a short period of time.
It is my view that medical research can perhaps be broadly divided into four categories:
1). Very Important and urgent: such as the COVID-19 or any past or future public health emergencies
2). Important but not urgent: hypertension, diabetes etc.
3). Not as important but urgent: diagnosis or care of rare genetic diseases etc.
4). Least important and not urgent: patient flow, drug compliance issues etc.
I concur with Dr. Abbott and her team that both the authors and the editors of the scientific community share the "duty" of ensuring and improving the quality of systematic reviews. As educators, we are equally responsible for providing future clinicians and scholars with the knowledge and skills to conduct high quality studies. Furthermore, meta-research is equally important in providing a scoping view of the evidence in review articles and in reminding us of the need for research vigilance. Lastly, it is perhaps also the responsibility of policy makers and governments to ensure that persistent and appropriate resources are channelled into the research community, especially in times of future public health emergencies.
Dear Editor,
This response is in relation to the above-titled article published in June 2019. Firstly, I would like to commend the outstanding research work done. While reading the article, I came to understand the relationship between the nursing field, evidence-based research, and the ways in which patients benefit from current health practices. Furthermore, the research demonstrated a wide range of benefits for other nursing career paths globally. It presented experts’ views on teaching evidence-based practice, evidence-based deliberations, and stakeholder engagement, all of which can impact the patients involved. I agree with the study’s conclusions on how research is essential for future advancements as well as improvements in patient care. Unfortunately, there is not much published research in The Bahamas on evidence-based practices from an expert’s view. Through further research, these findings could become more widespread and attract more views on this pressing matter.
We fully agree that “non-publication of trial results and selective outcome reporting…is not a phenomenon that is limited to homeopathy.”
Previous reviews in conventional medicine, such as the study by Kosa et al. in 2018, report “…substantive disagreement in reporting between publications and current clinical trial registry, which were associated with several study characteristics”.[1]
In 2019 The Lancet commented on the reporting of clinical trial data for 30 European universities that sponsor the largest number of trials governed by EU clinical trials regulation: “The report shows that 778 (83%) of 940 clinical trials sponsored by these universities due to post their results on the EU Clinical trials Register (EudraCT) had not done so”.[2]
The International Committee of Medical Journal Editors (ICMJE) announced in 2005 that “… trials that begin enrolment of patients after 1 July 2005 must register in a public trials registry at or before the onset of enrolment to be considered for publication …”.[3] EU rules that took effect in 2014 require all clinical trials registered in EudraCT to post summary results within 12 months of study completion.[2] Hence, the inclusion by Gartlehner et al. 2022 of homeopathy studies published in or before 2005 does not seem reasonable, and the inclusion of those published in or before 2014 is debatable.
Notwithstanding the above, Gartlehner et al. 2022 did not give precise information on sub-groups of studies. Hence, the conclusion that “This likely affects the validity of the body of evidence of homeopathic literature and may overestimate the true treatment effect of homeopathic remedies”, based on those studies with a modified or switched primary outcome measure or time point of assessment, is not adequately justified.
Previous studies published in the BMJ which looked at reporting bias in all medical fields showed that (a) half of all registered clinical trials in conventional medicine fail to report their results within a 12 month period, whereas according to Gartlehner et al. 2022 62% of all registered homeopathy trials reach publication,[4] and that (b) inconsistencies in reporting of primary outcome occur in 43% of conventional medical studies, whilst according to Gartlehner et al. 2022 this occurs in only 25% of published homeopathy trials.[5]
Hence, the most interesting finding is that homeopathy is out-performing conventional medicine in this respect, with lower levels of reporting bias.
References:
1. Kosa SD, Mbuagbaw L, Debono VB, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemp Clin Trials 2018;65:144-150. https://pubmed.ncbi.nlm.nih.gov/29287666/
2. Clinical trial registry reporting: a transparent solution needed. Lancet Oncol 2019; 20:741. https://www.thelancet.com/action/showPdf?pii=S1470-2045%2819%2930350-X
3. Abbasi K. Compulsory registration of clinical trials. BMJ 2004;329:637. https://www.bmj.com/content/329/7467/637
4. Goldacre B, DeVito NJ, Heneghan C, et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ 2018;362:k3218. https://pubmed.ncbi.nlm.nih.gov/30209058/
5. Shah K, Egan G, Huan LN, et al. Outcome reporting bias in Cochrane systematic reviews: a cross-sectional analysis. BMJ Open 2020;10(3):e032497. https://pubmed.ncbi.nlm.nih.gov/32184303/
Michael Frass, M.D.
Em. Professor of Medicine
Medical University of Vienna,
Vienna, Austria
Menachem Oberbaum, MD, FFHom (Lond.)
The Center for Integrative Complementary Medicine
Shaare Zedek Medical Center
Jerusalem, Israel
In their recent paper, Gartlehner et al [1] reached the headline conclusion that ‘effect estimates of meta-analyses of homeopathy trials might substantially overestimate the true treatment effect of homeopathic remedies’. Their conclusion is based on a re-analysis of data from one of the systematic reviews published by Mathie et al [2], taking into account the possible impact of a trial’s registration status. Gartlehner et al analysed a sub-set of 19 trials of non-individualised homeopathic treatment, comparing 6 trials that were registered with 13 trials that were not registered. They observed a statistically significant difference between homeopathy and placebo only for the non-registered trials; however, the difference in effect sizes between registered and non-registered trials did not reach statistical significance.
In conducting their re-analysis, Gartlehner et al have failed to recognise that the meta-analysis by Mathie et al [2] was primarily based on a sensitivity analysis of the three trials that comprised reliable evidence (effectively, low risk of bias): the pooled effect-size estimate for those three trials was statistically non-significant. Those three trials are amongst the 6 registered trials in Gartlehner’s re-analysis, and so it is no surprise that they contributed to a non-significant pooled effect size. A majority of the other 13 trials, now defined as non-registered [1], had previously been categorised by Mathie et al as high risk of bias [2]. If non-registration contributes to higher risk of bias, producing an overestimated effect size, then Gartlehner et al have discovered nothing new, and it is misleading that they have portrayed the Mathie et al paper [2] as one that could ‘substantially overestimate the true treatment effect of homeopathic remedies’ because it did not reflect the registration status of its analysed trials.
Moreover, the findings of another systematic review by Mathie et al (on clinical trials of individualised homeopathic treatment [3]) also remain unaffected by the Gartlehner et al paper. For the three trials with reliable evidence in that review, sensitivity analysis revealed a small but statistically significant effect size favouring homeopathy [3]. Gartlehner et al were unable to re-analyse these data effectively because of a paucity of registered trials – an observation that is also not surprising, since only 3 of 22 of those homeopathy trials were published after 2007, a year in which fewer than 40% of published trials in conventional medicine were registered [4]. Thus the headline findings from the two systematic reviews and meta-analyses published by Mathie et al [2, 3] stand intact.
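The statistical point at issue here (whether registered and non-registered subgroups differ significantly) is typically assessed with a test for subgroup differences. The sketch below illustrates the mechanics only; the pooled effect sizes and standard errors are hypothetical placeholders, not the actual values from either the Mathie or Gartlehner analyses.

```python
import math

def subgroup_difference_z(effect_a: float, se_a: float,
                          effect_b: float, se_b: float) -> float:
    """z test for the difference between two independent pooled effects."""
    return (effect_a - effect_b) / math.sqrt(se_a ** 2 + se_b ** 2)

# Hypothetical pooled standardised mean differences (registered vs
# non-registered trials) and their standard errors, for illustration only.
z = subgroup_difference_z(-0.12, 0.10, -0.36, 0.09)

# Two-sided p value from the standard normal distribution.
p = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these illustrative numbers the subgroups look quite different (the second pooled effect is three times the first), yet the test does not reach the conventional 0.05 threshold; this is how a real subgroup difference can fail to attain statistical significance when the subgroups are small.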
References
1. Gartlehner G, Emprechtinger R, Hackl M, et al. Assessing the magnitude of reporting bias in trials of homeopathy: a cross-sectional study and meta-analysis. BMJ Evidence-Based Medicine. Epub ahead of print: 15 March 2022. doi:10.1136/bmjebm-2021-111846.
2. Mathie RT, Ramparsad N, Legg LA, et al. Randomised, double-blind, placebo-controlled trials of non-individualised homeopathic treatment: systematic review and meta-analysis. Syst Rev 2017;6:63.
3. Mathie RT, Lloyd SM, Legg LA, et al. Randomised placebo-controlled trials of individualised homeopathic treatment: systematic review and meta-analysis. Syst Rev 2014;3:142.
4. Wong EKC, Lachance CC, Page J, et al. Selective reporting bias in randomised controlled trials from two network meta-analyses: comparison of clinical trial registrations and their respective publications. BMJ Open 2019;9:e031138.
The new study by Gartlehner et al. (1) claims that the benefits of homeopathy may have been over-estimated due to high levels of reporting bias. However, as this problem is well-known to affect all areas of medical research, context is everything.
Although the authors state that “non-publication of trial results and selective outcome reporting … is not a phenomenon that is limited to homeopathy”, they failed to provide adequate context for their results by making any direct comparison to other areas of clinical research. Homeopathy is arguably out-performing conventional medicine, or, at the very least, has comparable levels of reporting bias. Comparing representative high-impact studies of reporting bias across medical fields with the data presented by Gartlehner et al. (1), it is clear that:
1) half of all registered clinical trials (2) in conventional medicine fail to report their results within 12 months; whereas 62% of all registered homeopathy trials reach publication, and
2) inconsistencies in reporting of primary outcome (3) occur in 43% of conventional medical studies; whilst this happens in only 25% of published homeopathy trials.
The potential impact of unregistered/unpublished results on estimates of treatment effects is well known (4), yet for homeopathy, according to Gartlehner et al.(1), the impact may be minimal, or nothing at all: “the difference in effect sizes between registered and unregistered studies did not reach statistical significance”. Therefore, it is surprising that the authors claim that Dr Mathie’s “landmark meta-analyses”, used as the starting point for their analysis, “might substantially overestimate the true treatment effect of homeopathic remedies and need to be interpreted cautiously”. A thorough examination of their study reveals that their data do not support this claim.
While attempts have been made to use this new study to undermine the evidence base in homeopathy, claiming “poor research practice” (5), such claims are entirely unfounded. Reporting bias occurs in all areas of medical research, so, unsurprisingly, it occurs in homeopathy research too. Contrary to these authors’ claims, the clinical evidence base in homeopathy does not need more “cautious interpretation” than any other scientific evidence.
1. Gartlehner G et al. Assessing the magnitude of reporting bias in trials of homeopathy: a cross-sectional study and meta-analysis. BMJ Evidence-Based Medicine, 2022; eFirst
2. Goldacre B et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ, 2018;362:k3218
3. Shah K et al. Outcome reporting bias in Cochrane systematic reviews: a cross-sectional analysis. BMJ Open, 2020;10(3):e032497
4. Chen T et al. Comparison of clinical trial changes in primary outcome and reported intervention effect size between trial registration and publication. JAMA Netw Open, 2019;2(7):e197242
5. https://www.bmj.com/company/newsroom/poor-research-practice-suggests-tru...
The question we pose to the authors of the paper under discussion is a simple one. What is the actual, real-world evidence that the 'R' value fell as a result of the closure of schools? Is there any real-world, UK-based information from 2020 that we can use to answer this question? This question cannot remotely be answered by reference to other countries, or to mathematical models. The impact of school closures must also be analysed independently of all the other contemporaneous NPIs.
If the o...
As French Nuclear Medicine representatives, we read with great interest the article by Le Guludec et al. entitled: Rapid access to innovative medicinal products while ensuring relevant health technology assessment: Position of the French National Authority for Health. In this interesting and important position paper from the French Independent Health Technology body (HTAb) called “Haute Autorité de Santé” (HAS), the authors state that its recommendations derive from consultations with academics. Although we understand that accessibility to innovative drugs used for Positron Emission Tomography (PET) could be considered as a very ancillary issue by the HAS board that authored the paper, these PET imaging molecules are still considered as medicinal products from a regulatory standpoint and should be evaluated as such.
We regret that, to our knowledge, none of the academic members of the French Nuclear Medicine Society (SFMN) board were given the opportunity to draw attention to some of the specific features of the drugs commonly used in nuclear medicine by answering the questionnaire sent to the panelists (cf supplemental material). Indeed, we would clearly have answered “Yes” to the following questions:
- Are there specific methodological issues for Health Technology Assessment you wish to bring to our attention?
- Do you identify methodological issues relative to the assessment of innovative drugs in specific therapeutic areas?
We would also have be...
Dear Prof. Franco,
I am writing to request further clarification on the paper “Likelihood ratio interpretation of the relative risk”. The “key messages” section of this paper states that the study adds the following to the literature:
⇒ It is demonstrated that the conventional interpretation of the relative risk is in conflict with Bayes’ theorem.
⇒ The interpretation of the relative risk as a likelihood ratio connecting prior (unconditional) intervention risk to outcome conditional intervention risk is required to avoid conflict with Bayes’ Theorem
I will refer to the first bullet point as “Doi’s Conjecture”. Doi’s Conjecture is also stated in the second section of the main text, where it is claimed that “the usual interpretation (33% increase in the +ve outcome under treatment) contravenes Bayes Theorem”.
No attempt is made within the text to prove Doi’s Conjecture. But perhaps more worryingly, no attempt is made to define the term “interpretation”, a term which is not defined in standard probability theory. The meaning of Doi’s Conjecture is therefore at best ambiguous. Moreover, the manuscript relies substantially on claims about how effect measures are “perceived”, another term which is defined neither in probability theory nor in the manuscript.
The relative risk is defined as the risk of the outcome under treatment, divided by the risk of the outcome under the control condition; that is, as a ratio of two probabilities. Thi...
Title: “Claims about the main claim”
Authors: Suhail A. Doi, Polychronis Kostoulas, Paul Glasziou
In response to the published article "Likelihood ratio interpretation of the relative risk"
Rapid response:
September 16, 2022
The problem in evidence-based medicine arises when we port relative risks derived from one study to settings with different baseline risks. For example, a baseline risk of 0.2 and treated risk of 0.4 for an event in a trial gives a RR of 2 (0.4/0.2) and the complementary cRR of 0.75 (0.6/0.8). Thus the ratio of LRs (RR/cRR) is 2/0.75 = 2.67. If applied to a baseline risk of 0.5 the predicted risk under treatment with the RR “interpretation” is 1.0 but with the ratio of LRs “interpretation” is 0.73. Here, the interpretation of the risk ratio as a likelihood ratio, using Bayes’ theorem, clearly gives different results, and solves the problem of impossible risks as clearly depicted in the manuscript and the example.
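The arithmetic in the example above can be reproduced directly. The sketch below implements the two computations the response describes, using its numbers; the function names are ours, chosen for illustration.

```python
def predicted_risk_rr(baseline: float, rr: float) -> float:
    """Conventional 'interpretation': multiply the new baseline risk by the RR."""
    return baseline * rr

def predicted_risk_lr(baseline: float, rr: float, crr: float) -> float:
    """Likelihood-ratio 'interpretation': update the prior odds by the
    ratio of likelihood ratios (RR / cRR), per Bayes' theorem, then
    convert the posterior odds back to a risk."""
    prior_odds = baseline / (1 - baseline)
    posterior_odds = prior_odds * (rr / crr)
    return posterior_odds / (1 + posterior_odds)

# Trial data from the example: baseline risk 0.2, treated risk 0.4
rr = 0.4 / 0.2                # RR  = 2.0
crr = (1 - 0.4) / (1 - 0.2)   # cRR = 0.75, so RR/cRR = 2.67

# Port the effect to a new baseline risk of 0.5
print(predicted_risk_rr(0.5, rr))                 # 1.0
print(round(predicted_risk_lr(0.5, rr, crr), 2))  # 0.73
```

Note that for any new baseline risk above 0.5 the naive RR multiplication yields a "risk" greater than 1 (e.g. 0.6 × 2 = 1.2), whereas the odds-based update is bounded within [0, 1] by construction, which is the "impossible risks" problem the response refers to.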
If, in our effort to highlight the need of this correct interpretation, we have used strong wording that annoyed the commentator we feel the need to express regret. We hope that the commentator could also feel similarly for his scientifically unbecoming choice of wording that culminated with “Doi’s Conjecture”.
Conflict of Interest
None declared
I would like to congratulate Dr. Abbott and her team on this important and interesting article, which applied the methods of meta-science to early systematic review articles and the infodemic related to COVID-19.
Indeed, the COVID-19 pandemic arrived quickly and ferociously in early 2020, and with new variants still emerging, it continues to present the medical community, and indeed scientific circles, with challenging questions. Thanks to the selfless work of researchers, patients and frontline medical staff, we now have some valuable means of dealing with this pandemic.
The research community was presented with the challenging task of designing and conducting research to answer important questions about a new infectious disease in early 2020. The “new” coronavirus was ravaging parts of our world unchecked, so studies were conducted at pace, which unfortunately resulted in many duplicated and methodologically poor studies. On the other hand, the sheer volume of studies was itself useful, as it generated evidence to inform us of what does and does not work in combating COVID-19. For example, dexamethasone (RECOVERY trial) was found to be essential for severe COVID-19 patients, and hydroxychloroquine was found to be ineffective for COVID-19.
Having said this, I must state that I am not in support of the generation of poor quality clinical studi...
Dear Editor,
This response relates to the above-titled article published in June 2019. Firstly, I would like to commend the outstanding research that was done. While reading the article, I understood the correlation between the nursing field, evidence-based research, and the ways in which patients benefit from current health practices. Furthermore, the research demonstrated a wide range of benefits for other nursing career paths globally. It presented experts' views on teaching an evidence-based prospectus, evidence-based deliberations, and stakeholder engagement, all of which can impact the patients involved. I agree with the study conducted and with how essential research is for future advancements as well as improvements in patient care. Unfortunately, there is not as much published research on evidence-based practices from an expert view in The Bahamas. Through further research, this thesis could become widespread and obtain more views on this pressing matter.