I am writing to request further clarification on the paper “Likelihood ratio interpretation of the relative risk”. The “key messages” section of this paper states that the study adds the following to the literature:
⇒ It is demonstrated that the conventional interpretation of the relative risk is in conflict with Bayes’ theorem.
⇒ The interpretation of the relative risk as a likelihood ratio connecting prior (unconditional) intervention risk to outcome conditional intervention risk is required to avoid conflict with Bayes’ Theorem
I will refer to the first bullet point as “Doi’s Conjecture”. Doi’s Conjecture is also stated in the second section of the main text, where it is claimed that “the usual interpretation (33% increase in the +ve outcome under treatment) contravenes Bayes Theorem”.
No attempt is made within the text to prove Doi’s Conjecture. But perhaps more worryingly, no attempt is made to define the term “interpretation”, a term which is not defined in standard probability theory. The meaning of Doi’s Conjecture is therefore at best ambiguous. Moreover, the manuscript relies substantially on claims about how effect measures are “perceived”, another term which is defined neither in probability theory nor in the manuscript.
The relative risk is defined as the risk of the outcome under treatment, divided by the risk of the outcome under the control condition; that is, as a ratio of two probabilities. This manuscript appears to claim that “interpreting” the relative risk in the manner that it is defined, is inconsistent with Bayes’ Theorem, a fundamental result in probability theory. If this is true, probability theory is in deep conceptual trouble.
There are multiple correct (and mathematically equivalent) ways to represent effects within a study, to predict risk under treatment and to determine the posterior probability. This manuscript provides no coherent reason for one valid approach to take precedence over another valid approach.
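The point about mathematical equivalence can be checked numerically. The sketch below (the numbers and variable names are illustrative, not taken from either paper) shows that, within a single study, the relative risk applied as defined and a Bayes’-theorem route through likelihood ratios recover exactly the same treated risk — neither “contravenes” the other:

```python
# Illustrative check (hypothetical numbers): within one study, the direct
# definition of the relative risk and a Bayes'-theorem route through
# likelihood ratios yield the same treated risk.
p0, p1 = 0.2, 0.4               # control and treated risks in the study

# Route 1: relative risk as defined (a ratio of two probabilities)
rr = p1 / p0
risk_direct = rr * p0           # recovers 0.4 by construction

# Route 2: Bayes' theorem on the odds scale, using the ratio of the
# likelihood ratio to its complement
crr = (1 - p1) / (1 - p0)       # complementary relative risk
odds = (p0 / (1 - p0)) * (rr / crr)
risk_bayes = odds / (1 + odds)  # also 0.4

assert abs(risk_direct - risk_bayes) < 1e-12
```

Both routes agree within the study; any disagreement can only arise when the ratios are transported to a different baseline risk.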
Title: “Claims about the main claim”
Author: Suhail A. Doi, Polychronis Kostoulas, Paul Glasziou
In response to the published article "Likelihood ratio interpretation of the relative risk"
Rapid response:
September 16, 2022
The problem in evidence-based medicine arises when we port relative risks derived from one study to settings with different baseline risks. For example, a baseline risk of 0.2 and treated risk of 0.4 for an event in a trial gives a RR of 2 (0.4/0.2) and the complementary cRR of 0.75 (0.6/0.8). Thus the ratio of LRs (RR/cRR) is 2/0.75 = 2.67. If applied to a baseline risk of 0.5 the predicted risk under treatment with the RR “interpretation” is 1.0 but with the ratio of LRs “interpretation” is 0.73. Here, the interpretation of the risk ratio as a likelihood ratio, using Bayes’ theorem, clearly gives different results, and solves the problem of impossible risks as clearly depicted in the manuscript and the example.
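The arithmetic of this example can be reproduced step by step (a sketch; the variable names are illustrative, not from the paper):

```python
# Reproducing the example above: porting a trial's RR to a new baseline risk.
p0 = 0.2   # baseline (control) risk in the trial
p1 = 0.4   # treated risk in the trial

rr = p1 / p0                  # relative risk = 2.0
crr = (1 - p1) / (1 - p0)     # complementary RR = 0.6/0.8 = 0.75
lr_ratio = rr / crr           # ratio of likelihood ratios ~ 2.67

new_p0 = 0.5                  # baseline risk in the target setting

# RR "interpretation": scale the new baseline risk directly
risk_rr = rr * new_p0         # = 1.0 (an impossible risk for any larger baseline)

# Ratio-of-LRs "interpretation": apply Bayes' theorem on the odds scale
prior_odds = new_p0 / (1 - new_p0)
post_odds = prior_odds * lr_ratio
risk_lr = post_odds / (1 + post_odds)   # ~ 0.73

print(risk_rr, round(risk_lr, 2))
```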
If, in our effort to highlight the need for this correct interpretation, we have used strong wording that annoyed the commentator, we feel the need to express regret. We hope that the commentator might feel similarly about his scientifically unbecoming choice of wording, which culminated in “Doi’s Conjecture”.
Conflict of Interest
None declared
I would like to congratulate Dr. Abbott and her team in generating this piece of important and interesting article, which applied the methods of meta-science to the early systematic review articles and the infodemics related to COVID-19.
Indeed, the COVID-19 pandemic came quickly and ferociously, starting in early 2020 and lasting until recently, and with possible new variants emerging, it still presents the medical community, and indeed scientific circles, with challenging questions. Thanks to the selfless work of researchers, patients and frontline medical staff, we now have some valuable means to deal with this pandemic.
The research community was presented with the challenging task of designing and conducting research to answer important questions about the new infectious disease in early 2020. The “new” coronavirus was ravaging parts of our world unchecked, so studies were conducted at pace, which unfortunately resulted in many duplicated and methodologically poor studies. On the other hand, the sheer volume of studies may itself have been useful, as it generated evidence to inform us of what does and does not work in combating COVID-19. For example, dexamethasone (RECOVERY trial) was found to be essential for severe COVID-19 patients, while hydroxychloroquine was found to be ineffective.
Having said this, I must state that I am not in support of generating poor-quality clinical studies; rather, I am saying that different clinical and public health circumstances may call for different overall research strategies. Sometimes we need quick answers to important clinical questions, and then, unfortunately, quality is often sacrificed or overlooked for the sake of generating “rapid” evidence; the early phase of the COVID-19 pandemic may fall into this category. A relative lack of research capacity and resources may also have contributed to this unfortunate phenomenon, as we simply do not have the means to generate a massive amount of high-quality research in a short period of time.
It is my view that medical research can perhaps be broadly divided into four categories:
1). Very Important and urgent: such as the COVID-19 or any past or future public health emergencies
2). Important but not urgent: hypertension, diabetes etc.
3). Not as important but urgent: diagnosis or care of rare genetic diseases etc.
4). Least important and not urgent: patient flow, drug compliance issues etc.
I concur with Dr. Abbott and her team that both the authors and the editors of the scientific community share the “duty” of ensuring and improving the quality of systematic reviews. As educators, we are equally responsible for providing future clinicians and scholars with the knowledge and skills to conduct high-quality studies. Furthermore, meta-research is important in providing us with a scoping view of the evidence in review articles and in reminding us of the need for research vigilance. Lastly, it is perhaps also the responsibility of policy makers and governments to ensure that persistent and appropriate resources are channelled into the research community, especially in times of future public health emergencies.
Dear Editor,
This response is in relation to the above-titled article published in June 2019. Firstly, I would like to commend the outstanding research done. While reading the article, I came to understand the connections between the nursing field, evidence-based research, and the ways in which patients benefit from current health practices. Furthermore, the research demonstrated a wide range of benefits for other nursing career paths globally. It presented experts’ views on teaching an evidence-based prospectus, evidence-based deliberations, and stakeholder engagement, all of which can affect the patients involved. I agree with the study conducted and with the view that research is essential for future advancements as well as improvements in patient care. Unfortunately, there is not much published research in The Bahamas on evidence-based practices from an expert’s view. Through further research this thesis can become more widespread, to obtain more views on this pressing matter.
We fully agree that “non-publication of trial results and selective outcome reporting … is not a phenomenon that is limited to homeopathy.”
Previous reviews in conventional medicine, such as the study by Kosa et al. in 2018, report “…substantive disagreement in reporting between publications and current clinical trial registry, which were associated with several study characteristics”.[1]
In 2019 The Lancet commented on the reporting of clinical trial data for 30 European universities that sponsor the largest number of trials governed by EU clinical trials regulation: “The report shows that 778 (83%) of 940 clinical trials sponsored by these universities due to post their results on the EU Clinical trials Register (EudraCT) had not done so”.[2]
The International Committee of Medical Journal Editors (ICMJE) announced in 2005 that “… trials that begin enrolment of patients after 1 July 2005 must register in a public trials registry at or before the onset of enrolment to be considered for publication …”.[3] EU rules took effect in 2014, requiring all clinical trials registered in EudraCT to post summary results within 12 months of study completion.[2] Hence, the inclusion by Gartlehner et al. 2022 of homeopathy studies published in or before 2005 does not seem reasonable, and the inclusion of those published in or before 2014 is debatable.
Notwithstanding the above, precise information on sub-groups of studies was not given by Gartlehner et al. 2022. Hence, the conclusion, “This likely affects the validity of the body of evidence of homeopathic literature and may overestimate the true treatment effect of homeopathic remedies”, based on those studies with modified or switched primary outcome measure or the point of time of assessment, is not adequately justified.
Previous studies published in the BMJ which looked at reporting bias in all medical fields showed that (a) half of all registered clinical trials in conventional medicine fail to report their results within a 12 month period, whereas according to Gartlehner et al. 2022 62% of all registered homeopathy trials reach publication,[4] and that (b) inconsistencies in reporting of primary outcome occur in 43% of conventional medical studies, whilst according to Gartlehner et al. 2022 this occurs in only 25% of published homeopathy trials.[5]
Hence, the most interesting finding is that homeopathy is out-performing conventional medicine in this respect, with lower levels of reporting bias.
References:
1. Kosa SD, Mbuagbaw L, Debono VB, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemp Clin Trials 2018;65:144-150. https://pubmed.ncbi.nlm.nih.gov/29287666/
2. Clinical trial registry reporting: a transparent solution needed. Lancet Oncol 2019; 20:741. https://www.thelancet.com/action/showPdf?pii=S1470-2045%2819%2930350-X
3. Kamran A. Compulsory registration of clinical trials. BMJ 2004;329:637. https://www.bmj.com/content/329/7467/637
4. Goldacre B, DeVito NJ, Heneghan C, et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ 2018;362:k3218. https://pubmed.ncbi.nlm.nih.gov/30209058/
5. Shah K, Egan G, Huan LN, et al. Outcome reporting bias in Cochrane systematic reviews: a cross-sectional analysis. BMJ Open 2020;10(3):e032497. https://pubmed.ncbi.nlm.nih.gov/32184303/
Michael Frass, M.D.
Em. Professor of Medicine
Medical University of Vienna,
Vienna, Austria
Menachem Oberbaum, MD, FFHom (Lond.)
The Center for Integrative Complementary Medicine
Shaare Zedek Medical Center
Jerusalem, Israel
In their recent paper, Gartlehner et al [1] reached the headline conclusion that ‘effect estimates of meta-analyses of homeopathy trials might substantially overestimate the true treatment effect of homeopathic remedies’. Their conclusion is based on re-analysing data from one of the systematic reviews published by Mathie et al [2], taking into account the possible impact of a trial’s registration status. Gartlehner et al analysed a sub-set of 19 trials of non-individualised homeopathic treatment, comparing 6 trials that were registered with 13 trials that were not registered. They observed a statistically significant difference between homeopathy and placebo only for the non-registered trials; however, the difference in effect sizes between registered and non-registered trials did not reach statistical significance.
In conducting their re-analysis, Gartlehner et al have failed to recognise that the meta-analysis by Mathie et al [2] was primarily based on a sensitivity analysis of the 3 trials that comprised reliable evidence (effectively, low risk of bias): the effect-size estimate collectively for those trials yielded a statistically non-significant result. Those 3 trials are amongst the 6 registered trials in Gartlehner’s re-analysis, and so it is no surprise that they contributed to a non-significant pooled effect size. A majority of the other 13 trials, now defined as non-registered [1], had previously been categorised by Mathie et al as high risk of bias [2]. If non-registration contributes to higher risk of bias, producing an overestimated effect size, then Gartlehner et al have discovered nothing new, and it is misleading that they have portrayed the Mathie et al paper [2] as one that could ‘substantially overestimate the true treatment effect of homeopathic remedies’ because it did not reflect the registration status of its analysed trials.
Moreover, the findings of another systematic review by Mathie et al (on clinical trials of individualised homeopathic treatment [3]) also remain unaffected by the Gartlehner et al paper. For the three trials with reliable evidence in that review, sensitivity analysis revealed a small but statistically significant effect size favouring homeopathy [3]. Gartlehner et al were unable to re-analyse these data effectively because of a paucity of registered trials – an observation that is also not surprising, since only 3 of 22 of those homeopathy trials were published after 2007, a year in which fewer than 40% of published trials in conventional medicine were registered [4]. Thus the headline findings from the two systematic reviews and meta-analyses published by Mathie et al [2, 3] stand intact.
References
1. Gartlehner G, Emprechtinger R, Hackl M, et al. Assessing the magnitude of reporting bias in trials of homeopathy: a cross-sectional study and meta-analysis. BMJ Evidence-Based Medicine. Epub ahead of print: 15 March 2022. doi:10.1136/bmjebm-2021-111846.
2. Mathie RT, Ramparsad N, Legg LA, et al. Randomised, double-blind, placebo-controlled trials of non-individualised homeopathic treatment: systematic review and meta-analysis. Syst Rev 2017;6:63.
3. Mathie RT, Lloyd SM, Legg LA, et al. Randomised placebo-controlled trials of individualised homeopathic treatment: systematic review and meta-analysis. Syst Rev 2014;3:142.
4. Wong EKC, Lachance CC, Page J, et al. Selective reporting bias in randomised controlled trials from two network meta-analyses: comparison of clinical trial registrations and their respective publications. BMJ Open 2019;9:e031138.
The new study by Gartlehner et al. (1) claims that the benefits of homeopathy may have been over-estimated due to high levels of reporting bias. However, as this problem is well-known to affect all areas of medical research, context is everything.
Although the authors state that “non-publication of trial results and selective outcome reporting … is not a phenomenon that is limited to homeopathy”, they failed to provide adequate context for their results by making any direct comparison to other areas of clinical research. Homeopathy is arguably out-performing conventional medicine, or, at the very least, has comparable levels of reporting bias. Using representative examples of high-impact studies on reporting bias across all medical fields, when compared with the data presented by Gartlehner et al.(1) it is clear that:
1) half of all registered clinical trials (2) in conventional medicine fail to report their results within 12 months; whereas 62% of all registered homeopathy trials reach publication, and
2) inconsistencies in reporting of primary outcome (3) occur in 43% of conventional medical studies; whilst this happens in only 25% of published homeopathy trials.
The potential impact of unregistered/unpublished results on estimates of treatment effects is well known (4), yet for homeopathy, according to Gartlehner et al.(1), the impact may be minimal, or nothing at all: “the difference in effect sizes between registered and unregistered studies did not reach statistical significance”. Therefore, it is surprising that the authors claim that Dr Mathie’s “landmark meta-analyses”, used as the starting point for their analysis, “might substantially overestimate the true treatment effect of homeopathic remedies and need to be interpreted cautiously”. A thorough examination of their study reveals that their data do not support this claim.
While attempts have been made to use this new study to undermine the evidence base in homeopathy, claiming “poor research practice” (5), such claims are entirely unfounded. Reporting bias occurs in all areas of medical research, so, unsurprisingly, it occurs in homeopathy research too. Contrary to these authors’ claims, the clinical evidence base in homeopathy does not need more “cautious interpretation” than any other scientific evidence.
1. Gartlehner G et al. Assessing the magnitude of reporting bias in trials of homeopathy: a cross-sectional study and meta-analysis. BMJ Evidence-Based Medicine, 2022; eFirst
2. Goldacre B et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ, 2018;362:k3218
3. Shah K et al. Outcome reporting bias in Cochrane systematic reviews: a cross-sectional analysis. BMJ Open, 2020;16;10:e032497.
4. Chen T et al. Comparison of clinical trial changes in primary outcome and reported intervention effect size between trial registration and publication. JAMA, 2019; 2(7):e197242
5. https://www.bmj.com/company/newsroom/poor-research-practice-suggests-tru...
Gartlehner et al (1) concluded that the effects of homeopathic clinical trials may be overestimated due to publication bias. Such conclusions are inaccurate based on their own statement and their evaluation of the data they investigated. The authors asserted, “the difference in effect sizes between registered and unregistered studies did not reach statistical significance.” Despite this clear statement of what their data showed, the researchers instead came to a different conclusion that sought to question the integrity of research results with homeopathy.
To their credit, these authors acknowledge that the problem of “non-publication of trial results and selective outcome reporting … is not a phenomenon that is limited to homeopathy.” And yet, they purposefully chose not to reference any literature that has evaluated this problem of publication bias in clinical trials testing conventional medicine. A simple review of the literature would find studies showing that conventional medical trials have at least the same rate of publication bias as that reported for trials of homeopathic medicines (2), as well as reviews showing a much higher level of publication bias in reports of conventional medical treatments (3).
The fact is that several media (4)(5) that have reported on this study have come to the mistaken conclusion that the results of homeopathic clinical trials are not to be trusted, and this biased conclusion stems from the Gartlehner article that made conclusions that didn’t arise from their data.
It seems that these authors themselves are showing evidence of their own reporting bias and of their own avoidance of reporting on evidence that shows that conventional medical research may be as or more guilty of this problem. By maintaining conclusions that are not consistent with their finding that the effect size between registered and non-registered studies did not achieve statistical significance, and then deciding to avoid referencing how common this problem is in conventional medical research, these researchers seem to be showing “bad faith,” in other words, a deliberate intent to mislead.
(1) Gartlehner G, Emprechtinger R, Hackl M, et al. Assessing the magnitude of reporting bias in trials of homeopathy: a cross-sectional study and meta-analysis. BMJ Evidence-Based Medicine. Epub ahead of print: 15 March 2022. doi:10.1136/bmjebm-2021-111846.
(2) Kicinski M, Springate DA, Kontopantelis E. Publication bias in meta-analyses from the Cochrane Database of Systematic Reviews. Stat Med. 2015 Sep 10;34(20):2781-93. doi: 10.1002/sim.6525. Epub 2015 May 18. PMID: 25988604. https://pubmed.ncbi.nlm.nih.gov/25988604/
(3) Decullier E, Lhéritier V, Chapuis F. Fate of biomedical research protocols and publication bias in France: retrospective cohort study. BMJ. 2005;331(7507):19. doi:10.1136/bmj.38488.385995.8F https://www.ncbi.nlm.nih.gov/pmc/articles/PMC558532/
The article by Gartlehner et al. [1] is interesting because it allows the homeopathic community to elaborate on potential publication bias in clinical trials of homeopathy. There are, however, several questionable elements: in the article, and in the announcement made on the BMJ website, it is concluded that a high proportion of trials were not preregistered, yet at the same time Gartlehner acknowledges in the press that there has been a substantial improvement over time in the preregistration of trials [2]; it is mentioned that homeopaths must improve, but at the same time it is implied that "homeopathy cannot work".
On the second point, it is worth mentioning that in the article Gartlehner et al cite two papers, one by Grimes [3] and the other by Grams [4]. These essays are based on a biased selection of the literature and contain elementary errors. For example, Grimes says that Jacques Benveniste's famous study was published in "1987" and that Madeleine Ennis' work was negative when in fact it was positive [5]. Grimes bases his conclusions on theoretical claims (a simple calculation using Avogadro's constant) and not on the experimental studies that were available at the time (e.g. [6]). Grams, on the other hand, cites only some old articles from 1992 and 1993 without mentioning more recent studies (e.g. [7]).
References:
1. Gartlehner G, Emprechtinger E, Hackl M, Gartlehner J, Nonninger J, et al. (2022). Assessing the magnitude of reporting bias in trials of homeopathy: a cross-sectional study and meta-analysis. BMJ Evidence-Based Medicine; 27(1): 1-20.
2. Doheny K. (2022). Homeopathy: do ‘cherry-picked?’ studies exaggerate benefits? WebMD. Available at: https://www.webmd.com/balance/news/20220401/homeopathy-benefits-may-be-e...
3. Grimes R. (2012). Proposed mechanisms for homeopathy are physically impossible. Focus on Alternative and Complementary Therapies; 17(3): 149-155.
4. Grams N. (2019). Homeopathy – where is the science? A current inventory on a pre-scientific artifact. EMBO Reports; 20(3): 1-50.
5. Belon P, Cumps J, Ennis M, Mannaioni P, Sainte J, et al. (1999). Inhibition of human basophil degranulation by successive histamine dilutions: results of a European multicentre trial. Inflammation Research; 48 (S1): 17–18.
6. Demangeat L. (2009). NMR water proton relaxation in unheated and heated ultrahigh aqueous dilutions of histamine: evidence for an air-dependent supramolecular organization of water. Journal of Molecular Liquids; 144(1): 32-39.
7. Laudy S & Belon P. (2009). Inhibition of basophil activation by histamine: a sensitive and reproducible model for the study of the biological activity of high dilutions. Homeopathy; 98(4): 186-197.
Conflict of Interest: None. I am not a homeopath, but I have researched the subject of homeopathy from a social and historical point of view.
To our great sadness on Wednesday, April 13th 2022, Ingeborg Griffioen, author of “Innovating in healthcare: perspective from a dual role” passed away, at the age of 50. She departed with acceptance of the inevitable and in connection to those she loved and who loved her.
Ingeborg was founder and owner of Panton design studio, specialized in healthcare. In 2016 she started a PhD research on the use of service design to support shared decision making. Less than one year later, her husband was diagnosed with pancreatic cancer. Ingeborg incorporated their experiences with his care trajectory in her research, which led to the development of MetroMapping, a method to support shared decision making (www.metromapping.org/en). During this development process, Ingeborg herself was diagnosed with breast cancer.
It was one of Ingeborg’s dreams that MetroMapping be further developed and implemented at a large scale. Even during her chemotherapy she contributed to the 4D PICTURE project proposal, which focuses on adapting, evaluating and implementing MetroMapping in hospitals throughout Europe. In early 2022, we received the news that Ingeborg’s dream would come true, as 4D PICTURE was selected for funding by Horizon Europe. Ingeborg wrote a beautiful testimonial for 4D PICTURE:
“As a designer I have worked for 25 years in healthcare settings and as a researcher I studied treatment decision-making. I know the importance of thinking carefully about treatments goals and weighing benefits and risks. I thought I knew my way around in healthcare…
Then, in 2016, my husband was diagnosed with pancreatic cancer, and in early 2021, I turned out to have Triple Negative breast cancer. Twice I experienced a burdensome trajectory. Twice, there were clinicians intended to involve us in decision-making processes. However, twice, I often felt overwhelmed and unable to navigate the many treatment steps and decisions. Information was confusing and roles of caregivers were often unclear.
The problem turned out to be the unresponsive healthcare system, a system in which people and their needs get lost.
My experiences, as patient, as partner, as designer and researcher, inspired me to lay the foundation for the MetroMapping method. A first step to redesign entire care paths, together with patients, families, and clinicians.
My dream is to improve healthcare systems all around Europe, with patients at the heart of all care. This requires thinking big. I am so happy to be involved in a dedicated team of researchers, designers, clinicians and patient advocates. A team that knows how to use Big Data and that has drafted a solid plan, targeting at a web-based, user-friendly, open-source method and manual of MetroMapping, that includes support for decision making, and is freely accessible for all European hospitals and (future) patients. I'm in!”
In her last three months of life, which she called her “bonus time”, she set up a Foundation to support work on MetroMapping. Throughout her disease process she shared her ideas, experiences, and dreams in her blog (https://allthewaywithingebee.blog/), using striking metaphors and imagery.
Ingeborg initiated and inspired. She taught us so much and will continue to inspire all: designers, clinicians, quality of care staff, patients, and all others dedicated to improving the care trajectories and experiences of patients and their caregivers.
In loving memory,
Anne Stiggelbout, Marleen Kunneman, Arwen Pieterse. Medical Decision Making, Leiden University Medical Center
Dirk Snelders, Marijke Melles, Ena Voûte. Faculty of Industrial Design Engineering, Delft University of Technology
Judith Rietjens, Ida Korfage. Department of Public Health, Erasmus MC, University Medical Center Rotterdam
Jasper Brands, Mario de Zeeuw. Panton Design Studio, Deventer
Dear Prof. Franco,
I am writing to request further clarification on the paper “Likelihood ratio interpretation of the relative risk”. The “key messages” section of this paper states that the study adds the following to the literature:
⇒ It is demonstrated that the conventional interpretation of the relative risk is in conflict with Bayes’ theorem.
⇒ The interpretation of the relative risk as a likelihood ratio connecting prior (unconditional) intervention risk to outcome conditional intervention risk is required to avoid conflict with Bayes’ Theorem
I will refer to the first bullet point as “Doi’s Conjecture”. Doi’s Conjecture is also stated in the second section of the main text, where it is claimed that “the usual interpretation (33% increase in the +ve outcome under treatment) contravenes Bayes Theorem”.
No attempt is made within the text to prove Doi’s Conjecture. But perhaps more worryingly, no attempt is made to define the term “interpretation”, a term which is not defined in standard probability theory. The meaning of Doi’s Conjecture is therefore at best ambiguous. Moreover, the manuscript relies substantially on claims about how effect measures are “perceived”, another term which is defined neither in probability theory nor in the manuscript.
The relative risk is defined as the risk of the outcome under treatment, divided by the risk of the outcome under the control condition; that is, as a ratio of two probabilities. Thi...
Title: “Claims about the main claim”
Authors: Suhail A. Doi, Polychronis Kostoulas, Paul Glasziou
In response to the published article "Likelihood ratio interpretation of the relative risk"
Rapid response:
September 16, 2022
The problem in evidence-based medicine arises when we port relative risks derived from one study to settings with different baseline risks. For example, a baseline risk of 0.2 and treated risk of 0.4 for an event in a trial gives a RR of 2 (0.4/0.2) and the complementary cRR of 0.75 (0.6/0.8). Thus the ratio of LRs (RR/cRR) is 2/0.75 = 2.67. If applied to a baseline risk of 0.5 the predicted risk under treatment with the RR “interpretation” is 1.0 but with the ratio of LRs “interpretation” is 0.73. Here, the interpretation of the risk ratio as a likelihood ratio, using Bayes’ theorem, clearly gives different results, and solves the problem of impossible risks as clearly depicted in the manuscript and the example.
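The arithmetic in the example above can be sketched in a few lines of Python (a minimal illustration of the two approaches described in the paragraph; the function names are mine, not from the paper):

```python
def risk_via_rr(baseline_risk, rr):
    """Naive 'portability' of the relative risk: new risk = baseline risk x RR."""
    return baseline_risk * rr

def risk_via_lr_ratio(baseline_risk, rr, crr):
    """Bayes-consistent update: posterior odds = prior odds x (RR / cRR)."""
    prior_odds = baseline_risk / (1 - baseline_risk)
    posterior_odds = prior_odds * (rr / crr)
    return posterior_odds / (1 + posterior_odds)

# Trial from the example: baseline risk 0.2, treated risk 0.4
rr = 0.4 / 0.2              # relative risk = 2.0
crr = (1 - 0.4) / (1 - 0.2) # complementary RR = 0.75; ratio of LRs = 2/0.75 = 2.67

# Ported to a new baseline risk of 0.5:
print(risk_via_rr(0.5, rr))                       # 1.0 -- an impossible risk
print(round(risk_via_lr_ratio(0.5, rr, crr), 2))  # 0.73
```

With the naive RR interpretation the predicted treated risk reaches 1.0 (and would exceed 1 for any baseline above 0.5), whereas the ratio-of-likelihood-ratios update keeps the result inside [0, 1].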
If, in our effort to highlight the need for this correct interpretation, we have used strong wording that annoyed the commentator, we feel the need to express regret. We hope that the commentator might also feel similarly about his scientifically unbecoming choice of wording, which culminated in “Doi’s Conjecture”.
Conflict of Interest
None declared
I would like to congratulate Dr. Abbott and her team on this important and interesting article, which applied the methods of meta-science to the early systematic review articles and the infodemic related to COVID-19.
Indeed, the COVID-19 pandemic arrived quickly and ferociously in early 2020 and, with new variants still emerging, continues to present the medical community, and indeed scientific circles, with challenging questions. Thanks to the selfless work of researchers, patients and frontline medical staff, we now have some valuable means of dealing with this pandemic.
The research community was presented with the challenging task of designing and conducting research to answer important questions about a new infectious disease in early 2020. The “new” coronavirus was ravaging parts of our world unchecked, so studies were conducted at pace, which unfortunately resulted in many duplicative and methodologically poor studies. On the other hand, the sheer volume of studies was itself useful, as it generated evidence about what does and does not work in combating COVID-19. For example, dexamethasone (RECOVERY trial) was found to be essential for severe COVID-19 patients, while hydroxychloroquine was found to be ineffective for COVID-19.
Having said this, I must state that I am not in support of the generation of poor quality clinical studi...
Dear Editor,
This response is in relation to the above-titled article published in June 2019. Firstly, I would like to commend the outstanding research work done. While reading the article, I understood the correlation between the nursing field, evidence-based research, and the ways in which patients benefit from current health practices. Furthermore, the research demonstrated a wide range of benefits for other nursing career paths globally. It showed experts’ views on teaching an evidence-based prospectus, evidence-based deliberations, and stakeholder engagement, all of which can impact the patients involved. I agree with the study conducted and with how research is essential for future advancements as well as improvements in patient care. Unfortunately, there is not much published research in The Bahamas on evidence-based practices from an expert view. Through further research, this thesis can become widespread, so as to obtain more views on this pressing matter.
We fully agree that “non-publication of trial results and selective outcome reporting…is not a phenomenon that is limited to homeopathy.”
Previous reviews in conventional medicine, such as the study by Kosa et al. in 2018, report “…substantive disagreement in reporting between publications and current clinical trial registry, which were associated with several study characteristics”.[1]
In 2019 The Lancet commented on the reporting of clinical trial data for 30 European universities that sponsor the largest number of trials governed by EU clinical trials regulation: “The report shows that 778 (83%) of 940 clinical trials sponsored by these universities due to post their results on the EU Clinical trials Register (EudraCT) had not done so”.[2]
The International Committee of Medical Journal Editors (ICMJE) announced in 2005 that “… trials that begin enrolment of patients after 1 July 2005 must register in a public trials registry at or before the onset of enrolment to be considered for publication …”.[3] EU rules took effect in 2014, which require all clinical trials registered in EudraCT to post summary results within 12 months of study completion.[2] Hence, the inclusion by Gartlehner et al. 2022 of homeopathy studies published in or before 2005 does not seem reasonable, and the inclusion of those published in or before 2014 is debatable.
Notwithstanding the above, precise information on sub-groups of studies was not given by Gartlehner et al. 202...
In their recent paper, Gartlehner et al [1] reached the headline conclusion that ‘effect estimates of meta-analyses of homeopathy trials might substantially overestimate the true treatment effect of homeopathic remedies’. Their conclusion is based on having re-analysed one of the systematic review papers’ data published by Mathie et al [2] by taking into account the possible impact of a trial’s registration status. Gartlehner et al analysed a sub-set of 19 trials of non-individualised homeopathic treatment, comparing 6 trials that were registered with 13 trials that were not registered. They observed a statistically significant difference between homeopathy and placebo only for the non-registered trials; however, the difference in effect sizes between registered and non-registered trials did not reach statistical significance.
In conducting their re-analysis, Gartlehner et al have failed to recognise that the meta-analysis by Mathie et al [2] was primarily based on a sensitivity analysis of the 3 trials that comprised reliable evidence (effectively, low risk of bias): the effect-size estimate collectively for those 3 trials yielded a statistically non-significant result. Those 3 trials are amongst the 6 registered trials in Gartlehner’s re-analysis, and so it is no surprise that they contributed to a non-significant pooled effect size. A majority of the other 13 trials, now defined as non-registered [1], had previously been categorised by Mathie et al as high risk of bias...
The new study by Gartlehner et al. (1) claims that the benefits of homeopathy may have been over-estimated due to high levels of reporting bias. However, as this problem is well-known to affect all areas of medical research, context is everything.
Although the authors state that, “non-publication of trial results and selective outcome reporting …. is not a phenomenon that is limited to homeopathy”, they failed to provide adequate context for their results by making any direct comparison to other areas of clinical research. Homeopathy is arguably out-performing conventional medicine, or, at the very least, has comparable levels of reporting bias. Using representative examples of high-impact studies on reporting bias across all medical fields, when compared with the data presented by Gartlehner et al.(1) it is clear that:
1) half of all registered clinical trials (2) in conventional medicine fail to report their results within 12 months; whereas 62% of all registered homeopathy trials reach publication, and
2) inconsistencies in reporting of primary outcome (3) occur in 43% of conventional medical studies; whilst this happens in only 25% of published homeopathy trials.
The potential impact of unregistered/unpublished results on estimates of treatment effects is well known (4), yet for homeopathy, according to Gartlehner et al.(1), the impact may be minimal, or nothing at all: “the difference in effect sizes between registered and unregistered stud...
Gartlehner et al (1) concluded that the effects of homeopathic clinical trials may be overestimated due to publication bias. Such conclusions are inaccurate based on their own statement and their evaluation of the data they investigated. The authors asserted, “the difference in effect sizes between registered and unregistered studies did not reach statistical significance.” Despite this clear statement of what their data showed, the researchers instead came to a different conclusion that sought to question the integrity of research results with homeopathy.
To their credit, these authors acknowledge that the problem of “non-publication of trial results and selective outcome reporting …. is not a phenomenon that is limited to homeopathy.” And yet, they purposefully chose not to reference any literature that evaluated this problem of publication bias in clinical trials testing conventional medicine. A simple review of the literature would find everything from conventional medical trials having at least the same rate of publication bias as those testing homeopathic medicines (2), to reviews of research showing a much higher level of publication bias in reporting on conventional medical treatments (3).
The fact is that several media (4)(5) that have reported on this study have come to the mistaken conclusion that the results of homeopathic clinical trials are not to be trusted, and this biased conclusion stems from the Gartlehner articl...