As French Nuclear Medicine representatives, we read with great interest the article by Le Guludec et al. entitled: Rapid access to innovative medicinal products while ensuring relevant health technology assessment: Position of the French National Authority for Health. In this interesting and important position paper from the French Independent Health Technology body (HTAb) called “Haute Autorité de Santé” (HAS), the authors state that its recommendations derive from consultations with academics. Although we understand that accessibility to innovative drugs used for Positron Emission Tomography (PET) could be considered as a very ancillary issue by the HAS board that authored the paper, these PET imaging molecules are still considered as medicinal products from a regulatory standpoint and should be evaluated as such.
We regret that, to our knowledge, none of the academic members of the French Nuclear Medicine Society (SFMN) board were given the opportunity to draw attention to some of the specific features of the drugs commonly used in nuclear medicine by answering the questionnaire sent to the panelists (cf supplemental material). Indeed, we would clearly have answered “Yes” to the following questions:
- Are there specific methodological issues for Health Technology Assessment you wish to bring to our attention?
- Do you identify methodological issues relative to the assessment of innovative drugs in specific therapeutic areas?
We would also have been grateful to have been given the chance to present the opinion of the SFMN regarding the following two key questions and the SWOT analysis proposed by the HAS:
- What clinical trial designs or methodological specificities could be useful to accelerate access to innovative drugs?
- How could the HAS modify its methodology for Health Technology Assessment?
In addition to the opinion of the SFMN, we would like to bring to the authors’ and your readers’ attention the PIPAME (1) and the Court of Auditors (2) reports, which stressed many years ago that access to medical imaging innovations is of tremendous importance for patients. They both concluded that overcoming ‘administrative’ hurdles regarding evaluation and reimbursement could facilitate access for patients and give a boost to French small-to-medium-sized enterprises (SMEs) by reducing the time to market. Indeed, it is common knowledge that until recently, French patients had to go abroad to get access to PET imaging for prostate cancer or Neuro-Endocrine Tumors (NETs), and that contributions to innovations by French SMEs in this domain are still few and far between.
Nuclear medicine uses radiolabeled drugs for targeted irradiation to treat malignant diseases. The SFMN is grateful to the Ministry of Health for allowing our patients rapid and compassionate-use access to molecular radiotherapy. Even though it does not meet the HAS definition of targeted therapies stricto sensu, molecular radionuclide therapy is by nature a targeted therapy whose indications rely indisputably on companion PET imaging markers, hence the concept of ‘theranostics’. However, most radiolabeled drugs are used for diagnostic purposes, and the number of new PET procedures per year is skyrocketing (https://www.cnp-mn.fr/wp-content/uploads/2023/01/2021_Enquete-Nationale-...).
We would particularly like to stress that blinded randomized controlled trials (RCT), which are claimed to be the cornerstone of drug efficacy and toxicity assessment, are not relevant for assessing molecular imaging drugs, for reasons of practicality and cost-effectiveness. In our opinion, the lack of appropriate key performance indicators for PET drugs has for many years been the elephant in the HAS medical imaging room.
We were reassured to read that the HAS acknowledges that conditions may exist that make conducting RCT unreasonable. However, assessing the performance of medical imaging has nothing to do with any hypothetical deductive approaches deemed to benefit from this RCT exemption. PET imaging has entered an era of continuous progress in which mathematics and physics are giving rise to reliable metrics such as spatial resolution, detection sensitivity, and dosimetry. Thanks to histopathologic verification, it is possible to calculate diagnostic performance and likelihood ratios (LR) even in small cohorts. Not using these assessments, which are based on evidence that has a high level of certainty, has led to bizarre ranking decisions (added medical value; i.e Amélioration du Service Medical Rendu: ASMR V). In turn, this has led to French patients being hindered in their access to PET procedures that are performed daily in neighboring countries.
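As a rough sketch of how these metrics follow directly from diagnostic performance (the sensitivity and specificity figures below are hypothetical, chosen only for illustration):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from diagnostic performance.

    LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity.
    """
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Hypothetical performance figures, e.g. from a small histopathology-verified cohort
lr_pos, lr_neg = likelihood_ratios(0.95, 0.92)
print(round(lr_pos, 1), round(lr_neg, 3))  # 11.9 0.054
```

Because both ratios are simple functions of sensitivity and specificity, they can be estimated with usable precision even from the small, histopathology-verified cohorts described above.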
For example, take the ‘clinical case’ of 68Ga-edotreotide, a radiopharmaceutical thought to be innovative in France while being used in standard practice elsewhere for the staging and follow-up of NETs. The HAS concluded that 68Ga-edotreotide has high clinical value (Service Médical Rendu; SMR) but poor added medical value (ASMR rank = V) (https://www.has-sante.fr/portail/jcms/c_2758095/fr/iasotoc-05042017-avis-ct15806) because of the lack of a clinically relevant comparator. The HAS therefore considered that somatostatin receptor scintigraphy with 111In-octreotide should be the relevant comparator, even though it is no longer in use in many countries because of its notoriously poor performance, and recommended using 111In-octreotide upfront. Almost the same is happening regarding the use of 68Ga-PSMA for prostate cancer.
However:
- Patient irradiation was not taken into consideration: 111In-octreotide delivers a 2.4-fold higher dose to the patient than 68Ga-edotreotide, so complying with the HAS recommendation raised an ethical issue.
- 111In-octreotide scintigraphy requires patients to come to the hospital twice, so do we really need a prospective evaluation to conclude that a 2-day procedure (the historical comparator) is more expensive than a single-day procedure, whatever the impact on patient management?
- The likelihood ratio is about 12 for 68Ga-edotreotide and about 6 for 111In-octreotide (i.e. the risk of a false negative is 2-fold higher for a NET patient undergoing 111In-octreotide scintigraphy). Again, the clinical benefit can be derived directly from these metrics.
The added clinical value was also considered poor because of the lack of evidence on patient management. This is a substantial methodological shift, going far beyond comparing diagnostic accuracy with a relevant comparator, and it implies a much more ambitious methodology with the risk of an expensive trial. Considering the time needed to initiate, conduct and analyze such an RCT, the treatment under study would more likely than not be considered obsolete by the time of publication, if any. Moreover, the cost-effectiveness of such an approach remains questionable. Nunn et al estimate that, at a cost of US$100 per dose, profitability is reached at around 1 million doses per year (3, 4). Such massive use of PET tracers costing more than US$1000 is simply not sustainable. Based on the analyses of the radiopharmaceutical market by DiMasi et al (5, 6) and Nunn et al (3, 4), it is very unlikely that an SME could spend around US$200 million developing a PET drug, especially since PET procedures are reimbursed (at least in France) independently of the cost of the molecule, whatever its ASMR.
However, some years ago the HAS came up with an interesting alternative (HAS, Note de cadrage: Place de la technique du ganglion sentinelle dans la stratégie diagnostique de l’envahissement ganglionnaire d’un cancer du sein à un stade précoce, September 2011, www.has-sante.fr). Considering that the predictive values of a test can easily be derived from the LR and the pretest probability (7) (i.e. prevalence, TNM stage), the HAS demonstrated that cost-effectiveness studies can be used to determine the value of sentinel node scintigraphy for breast cancer surgery. The same methodology could therefore easily be applied to pragmatic trials or real-world cohorts of PET patients. The SFMN has created a real-world registry that provides more reliable data than the PMSI, as it also includes outpatients and more detailed data than that collected in early or compassionate use programs. The theranostics market is expected to grow from US$4 billion in 2013 to US$14 billion in 2025 (Richard Zimmerman, Oncidium Foundation).
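The Bayesian step this methodology relies on, going from a pretest probability and an LR to a post-test probability, is a one-line calculation in odds form. In the sketch below the 30% pretest probability is purely hypothetical; the LRs of 12 and 6 are the figures quoted earlier for the two tracers:

```python
def post_test_probability(pretest_p, lr):
    """Bayes' theorem in odds form: post-test odds = pretest odds x LR."""
    pretest_odds = pretest_p / (1 - pretest_p)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical 30% pretest probability; LRs ~12 and ~6 as quoted for the two tracers
for lr in (12, 6):
    print(round(post_test_probability(0.30, lr), 2))  # 0.84 then 0.72
```

The same calculation underlies the Fagan nomogram, which is why predictive values can be read off from LR and pretest probability without a new comparative trial.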
We agree with the HAS that one should avoid overusing the term ‘innovation’ when discussing any drug candidate. However, nuclear medicine has come a long way since 2007, when Adrian Nunn pointed out that the approval of radiopharmaceuticals was at an all-time low, with all the major radiopharmaceutical agents in use having been approved over 10 years before. Recent successes such as the Food and Drug Administration approval of Lutathera and NETSPOT have resulted in an increasing number of pharmaceutical companies pursuing theranostics, with further impetus provided by the purchase by Novartis of Advanced Accelerator Applications for $3.9 billion and of Endocyte, Inc. for $2.1 billion (8).
Times are changing; it’s time to rethink matters collectively in the quest to discover a more efficient and agile way to assess medical imaging innovations.
We remain at the disposal of the HAS to continue this discussion.
References
1. Pôle interministériel de Prospective et d'Anticipation des Mutations économiques (PIPAME). http://competitivite.gouv.fr/documents/commun/Documentation_poles/etudes...
2. Cour des comptes. L'imagerie médicale. Mai 2016. www.ccomptes.fr
3. Nunn AD. The cost of bringing a radiopharmaceutical to the patient's bedside. J Nucl Med 2007;48:169.
4. Nunn AD. The cost of developing imaging agents for routine clinical use. Invest Radiol 2006;41:206.
5. DiMasi JA, Hansen RW, Grabowski HG. The price of innovation: new estimates of drug development costs. J Health Econ 2003;22:151-85.
6. DiMasi JA, Hansen RW, Grabowski HG, Lasagna L. Cost of innovation in the pharmaceutical industry. J Health Econ 1991;10:107-42.
7. Fagan TJ. Letter: Nomogram for Bayes theorem. N Engl J Med 1975;293:257.
8. Cutler CS. Economics of new molecular targeted personalized radiopharmaceuticals. Semin Nucl Med 2019;49:450-7.
We were surprised that BMJ Evidence Based Medicine chose to publish the flawed article by Høeg and co-authors on methodological limitations of research on long COVID (1). This piece appears to be a ‘Trojan Horse’ article where a scientifically dubious proposition escapes proper scrutiny because it is cloaked in otherwise plausible research commentary.
As the authors state, we need well designed studies to provide a valid measure of the long-term effects of acute COVID-19 infection (Long COVID). Such studies require robust case definitions, adequate duration of follow-up, and suitable comparison groups.
But in a section titled “The most well-designed studies provide reassuring estimates”, the authors include just two studies to support that sweeping statement. This highly selective ‘mini meta-analysis’ subverts the very purpose of evidence-based medicine. The main message of the Høeg paper appears to be that there is a negligible risk of long COVID, based on the selection of papers they have cited. That message does not fit with the actual body of scientific evidence (2). There is now overwhelming research that SARS-CoV-2 infection carries a significant risk of long-term effects over and above the generic effects of post-ICU syndrome and pneumonia (3).
The evidence of long-term effects comes from multiple sources, including epidemiological studies and basic science research looking at the severe and lasting pathological changes that occur in some patients following SARS-CoV-2 infection. This research shows that COVID-19 is a multisystem disease that can cause microclots, changes to the immune system, viral persistence in tissues, and other effects even in mild cases (4). These effects provide a basis for well-described sequelae of COVID-19 such as impaired brain function, extreme fatigue, and stroke.
Furthermore, tissue-level effects can be clinically silent but they include known risk factors for heart disease and other conditions, raising concerns for future population health. Viral persistence suggests additional potential for long-term effects that may take years or decades to emerge, as we have seen with other infections (5).
Even the research selectively cited in the Høeg article is not as reassuring as the authors imply. Antonelli et al. reported an overall lower risk of Long COVID for Omicron compared with Delta infection (6). But they noted that Omicron variants have caused far higher case numbers, and the conclusion of their article is that "future numbers with long COVID will inevitably rise".
Full declaration of potential conflicts of interest is important in evidence-based medicine. Yet at least two of the authors here have made no secret of their unorthodox scientific views on the origins, transmission, severity and prevention of COVID-19 and appear to be associated with particular ideological positions and organisations (7, 8).
Rather than the robust science we might reasonably expect from the highly regarded BMJ Evidence-Based Medicine, the Høeg paper serves to illustrate the very biases and errors which evidence-based medicine was established to challenge. It appears to be a ‘Trojan Horse’ from a partisan group. Its publication significantly damages the reputation of the journal as a platform for rigorous, principled, and balanced scientific debate.
References
1. Hoeg TB, Ladhani S, Prasad V. How methodological pitfalls have created widespread misunderstanding about long COVID. BMJ Evid Based Med 2023 doi: 10.1136/bmjebm-2023-112338 [published Online First: 2023/09/26]
2. Altmann DM, Whettlock EM, Liu S, et al. The immunology of long COVID. Nature reviews Immunology 2023;23(10):618-34. doi: 10.1038/s41577-023-00904-7 [published Online First: 2023/07/12]
3. Bowe B, Xie Y, Al-Aly Z. Postacute sequelae of COVID-19 at 2 years. Nat Med 2023;29(9):2347-57. doi: 10.1038/s41591-023-02521-2 [published Online First: 2023/08/22]
4. Castanares-Zapatero D, Chalon P, Kohn L, et al. Pathophysiology and mechanism of long COVID: a comprehensive review. Annals of medicine 2022;54(1):1473-87. doi: 10.1080/07853890.2022.2076901 [published Online First: 2022/05/21]
5. Chen B, Julg B, Mohandas S, et al. Viral persistence, reactivation, and mechanisms of long COVID. eLife 2023;12 doi: 10.7554/eLife.86015 [published Online First: 2023/05/04]
6. Antonelli M, Pujol JC, Spector TD, et al. Risk of long COVID associated with delta versus omicron variants of SARS-CoV-2. Lancet 2022;399(10343):2263-64. doi: 10.1016/S0140-6736(22)00941-2 [published Online First: 2022/06/20]
7. Bragman W. New scientist group calling for pandemic answers is tied to rightwing Dark Money. OptOut, 2023; https://www.optout.news/newsletters/norfolk-group.
8. Prasad V. Do not report COVID cases to schools & do not test yourself if you feel ill. Vinay Prasad's Observations and Thoughts, 2023; https://vinayprasadmdmph.substack.com/p/do-not-report-covid-cases-to-sch....
Dr Juan Franco
Editor-In-Chief
BMJ Evidence Based Medicine
BMA House
Tavistock Square
London WC1H 9JP
UNITED KINGDOM
31 October 2023
Dear Editor-In-Chief,
We read with interest the recent article by Høeg and colleagues that describes how methodological limitations in long COVID research distort risk and overestimate prevalence.[1]
The authors propose criteria to improve epidemiological research of long COVID. We write in support of these criteria, and to suggest two additions. We recently compared outcomes three months after PCR-confirmed COVID-19 infection with PCR-confirmed influenza infection, and found no difference between these illnesses.[2] Our comparative observational study had limitations (which we acknowledged) but was noteworthy because it was conducted in an Australian population that was primarily exposed to the Omicron variant after achieving high vaccination rates (>90%).
As a result, our two proposed additions to Høeg et al’s criteria relate to the exposed population which, as they suggest, should have diagnostic evidence of infection.
The first addition is to document the COVID variant to which this population was exposed. Recent data from Sweden shows a progressive (and substantial) decrease in the risk of long COVID from the wild type to the Omicron variant.[3] In addition, the type and frequency of symptoms has changed as the virus evolves.[4] This inclusion would improve our understanding of post-viral impacts by variant, and add important context to Høeg et al’s sensible suggestion of a symptom-based approach to support patients.
The second addition involves documenting the population’s vaccination status, including (if possible) the time since last dose. A systematic review found that COVID-19 vaccination could protect against long COVID, but also noted that study quality was generally low for the reasons argued by Høeg et al.[5] While ongoing reinfections and decreased diagnostic testing may cloud the benefits of vaccination, vaccination status offers important insights into the symptoms and impacts of each COVID variant.
Finally, we concur with the article’s observations about the ongoing impact of regular negative reports about long COVID. We have heard people say they are “more afraid of getting long COVID than they are of getting COVID”. Høeg and colleagues have provided a framework to challenge the many inflated claims that may contribute to this fear and anxiety. While we believe and support those who experience post-viral effects, we must remember the most likely outcome after COVID-19 infection is a full recovery.
Yours sincerely,
Matthew Brown (Program Manager, Queensland Long COVID Response)
John Gerrard (Chief Health Officer, Queensland)
Ross Andrews (Senior Consultant Epidemiologist)
References
1 Hoeg TB, Ladhani S, Prasad V. How methodological pitfalls have created widespread misunderstanding about long COVID. BMJ Evid Based Med 2023.
2 Brown M, Gerrard J, McKinlay L, et al. Ongoing symptoms and functional impairment 12 weeks after testing positive for SARS-CoV-2 or influenza in Australia: an observational cohort study. BMJ Public Health 2023; 1(1).
3 Hedberg P, Naucler P. Post COVID-19 condition after SARS-CoV-2 infections during the omicron surge compared with the delta, alpha, and wild-type periods in Stockholm, Sweden. J Infect Dis 2023.
4 Looi MK. How are covid-19 symptoms changing? BMJ 2023; 380: 3.
5 Byambasuren O, Stehlik P, Clark J, et al. Effect of covid-19 vaccination on long covid: systematic review. BMJ Medicine 2023; 2(1).
I want to express my concern regarding “Curcumin and proton pump inhibitors for functional dyspepsia: a randomised, double blind controlled trial” by Kongkam et al(1). It was published against the journal’s editorial policy and has serious issues with reporting and interpretation of results.
The article should not have been published in the first place. It lacks prospective registration, which directly contradicts the BMJ Evidence-Based Medicine editorial policy stating that prospective registration is mandatory for any clinical trial(2). The Thai Clinical Trials Registry(3) registration TCTR20221208003 is retrospective, as is clearly stated in the registry. The registration was submitted on 07 December 2022, just before a preprint was posted on medRxiv on 09 December 2022, whereas the study was completed on 30 April 2020.
On top of that, there are serious issues with the reporting and interpretation of results.
According to the authors, an equivalence design was used with an equivalence margin of 2 points on the SODA score. Nine comparisons of SODA scores in the curcumin plus omeprazole (C+O), curcumin only (C), and omeprazole only (O) groups were reported. For three of those comparisons, the confidence intervals include the equivalence margin. The only available interpretation is that the trial failed to demonstrate equivalence: to demonstrate equivalence, the confidence intervals should lie entirely between the two equivalence margins rather than include them. The fact that “no significant differences were observed among the three groups” is also fully irrelevant; it does not demonstrate equivalence, as equivalence cannot be claimed on the basis of nonsignificant tests(4).
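The decision rule at stake can be made explicit in a few lines. This is a sketch only; the confidence intervals below are illustrative, not the trial's:

```python
def shows_equivalence(ci_low, ci_high, margin=2.0):
    """Equivalence is demonstrated only if the whole confidence interval
    for the difference lies strictly inside (-margin, +margin)."""
    return -margin < ci_low and ci_high < margin

# Illustrative (hypothetical) 95% CIs for a difference in SODA scores, margin = 2 points
print(shows_equivalence(-1.2, 1.5))  # True: CI entirely within the +/-2 margin
print(shows_equivalence(-0.8, 2.4))  # False: CI includes the upper margin, equivalence not shown
```

Note that a nonsignificant difference test corresponds only to the interval containing zero, which says nothing about whether the interval also stays inside the margins.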
Another striking deficiency in reporting is the unexplained loss to follow-up difference between the study arms. While 17 participants were lost to follow-up in the curcumin plus omeprazole and 17 in the curcumin only groups, only 1 was lost to follow-up in the omeprazole only arm. At the same time, the numbers of subjects who withdrew consent were also noticeably different – 2, 2 and 18 respectively. This difference is unlikely to have arisen by chance alone. There are two possible explanations: either the numbers of subjects who withdrew consent and those lost to follow-up were mistakenly swapped, or the loss to follow-up was systematically different in the omeprazole only arm compared to the two arms with curcumin. The former would question the peer review, the latter would question the blinding. Given the taste and smell of curcumin the blinding should have been questioned even if there were no differences in attrition.
It is hard to believe that the CONSORT requirements, which explicitly state that the interpretation must be consistent with the results and that the primary and secondary outcome measures must be completely defined and pre-specified, were sufficiently taken into consideration during the review process. I hope appropriate corrective action will be taken by the journal.
(1) Kongkam P, Khongkha W, Lopimpisuth C, et al. Curcumin and proton pump inhibitors for functional dyspepsia: a randomised, double blind controlled trial. BMJ Evidence-Based Medicine 2023;28:399-406.
(4) Piaggio G, Elbourne DR, Pocock SJ, Evans SJW, Altman DG, CONSORT Group FT. Reporting of Noninferiority and Equivalence Randomized Trials: Extension of the CONSORT 2010 Statement. JAMA. 2012;308(24):2594–2604. doi:10.1001/jama.2012.87802
I am writing to request further clarification on the paper “Likelihood ratio interpretation of the relative risk”. The “key messages” section of this paper states that the study adds the following to the literature:
⇒ It is demonstrated that the conventional interpretation of the relative risk is in conflict with Bayes’ theorem.
⇒ The interpretation of the relative risk as a likelihood ratio connecting prior (unconditional) intervention risk to outcome conditional intervention risk is required to avoid conflict with Bayes’ Theorem
I will refer to the first bullet point as “Doi’s Conjecture”. Doi’s Conjecture is also stated in the second section of the main text, where it is claimed that “the usual interpretation (33% increase in the +ve outcome under treatment) contravenes Bayes Theorem”.
No attempt is made within the text to prove Doi’s Conjecture. But perhaps more worryingly, no attempt is made to define the term “interpretation”, a term which is not defined in standard probability theory. The meaning of Doi’s Conjecture is therefore at best ambiguous. Moreover, the manuscript relies substantially on claims about how effect measures are “perceived”, another term which is defined neither in probability theory nor in the manuscript.
The relative risk is defined as the risk of the outcome under treatment, divided by the risk of the outcome under the control condition; that is, as a ratio of two probabilities. This manuscript appears to claim that “interpreting” the relative risk in the manner that it is defined, is inconsistent with Bayes’ Theorem, a fundamental result in probability theory. If this is true, probability theory is in deep conceptual trouble.
There are multiple correct (and mathematically equivalent) ways to represent effects within a study, to predict risk under treatment and to determine the posterior probability. This manuscript provides no coherent reason for one valid approach to take precedence over another valid approach.
Title: “Claims about the main claim”
Authors: Suhail A. Doi, Polychronis Kostoulas, Paul Glasziou
In response to the published article “Likelihood ratio interpretation of the relative risk”
Rapid response: September 16, 2022
The problem in evidence-based medicine arises when we port relative risks derived from one study to settings with different baseline risks. For example, a baseline risk of 0.2 and a treated risk of 0.4 for an event in a trial give a RR of 2 (0.4/0.2) and the complementary cRR of 0.75 (0.6/0.8). Thus the ratio of LRs (RR/cRR) is 2/0.75 = 2.67. If applied to a baseline risk of 0.5, the predicted risk under treatment with the RR “interpretation” is 1.0, but with the ratio-of-LRs “interpretation” it is 0.73. Here, interpreting the risk ratio as a likelihood ratio, using Bayes’ theorem, clearly gives different results and solves the problem of impossible risks, as depicted in the manuscript and the example.
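The arithmetic in that example can be reproduced directly. This sketch uses the exact numbers from the paragraph above (baseline risk 0.2, treated risk 0.4, new baseline 0.5); the function names are ours, for illustration only:

```python
def port_rr(p0, rr):
    """Naive relative-risk port of an effect to a new baseline risk; can exceed 1."""
    return p0 * rr

def port_lr(p0, rr, crr):
    """Likelihood-ratio (Bayes) port: always yields a valid probability in [0, 1]."""
    return (p0 * rr) / (p0 * rr + (1 - p0) * crr)

rr = 0.4 / 0.2               # RR = 2.0 from the trial
crr = (1 - 0.4) / (1 - 0.2)  # cRR = 0.75
print(port_rr(0.5, rr))                 # 1.0 -- at the boundary of impossible risk
print(round(port_lr(0.5, rr, crr), 2))  # 0.73
```

With a new baseline risk above 0.5, the naive port would predict a "risk" greater than 1, while the Bayes port remains a valid probability.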
If, in our effort to highlight the need of this correct interpretation, we have used strong wording that annoyed the commentator we feel the need to express regret. We hope that the commentator could also feel similarly for his scientifically unbecoming choice of wording that culminated with “Doi’s Conjecture”.
Conflict of Interest
None declared
I would like to congratulate Dr. Abbott and her team on this important and interesting article, which applied the methods of meta-science to early systematic review articles and the infodemic related to COVID-19.
Indeed, the COVID-19 pandemic arrived quickly and ferociously in early 2020 and has lasted until recently; with possible new variants still emerging, it continues to present the medical community, and indeed scientific circles, with challenging questions. Thanks to the selfless work of researchers, patients and frontline medical staff, we now have some valuable means of dealing with this pandemic.
In early 2020, the research community was presented with the challenging task of designing and conducting studies to answer important questions about the new infectious disease. The “new” coronavirus was ravaging parts of our world unchecked, so studies were conducted at pace, which unfortunately resulted in many duplicated and methodologically poor studies. On the other hand, the sheer volume of studies may itself have been useful, as it generated evidence about what does and does not work in combating COVID-19. For example, dexamethasone (RECOVERY trial) was found to be essential for severe COVID-19 patients, while hydroxychloroquine proved ineffective for COVID-19.
Having said this, I must state that I do not support the generation of poor-quality clinical studies. Rather, different clinical and public health circumstances may call for different overall research strategies: sometimes we need quick answers to important clinical questions, and quality is then unfortunately sacrificed or overlooked for the sake of generating “rapid” evidence; the early phase of the COVID-19 pandemic may fall into this category. A relative lack of research capacity and resources may also have contributed to this unfortunate phenomenon, as we simply do not have the means to generate a massive amount of high-quality research in a short period of time.
It is my view that medical research can perhaps be broadly divided into four categories:
1). Very Important and urgent: such as the COVID-19 or any past or future public health emergencies
2). Important but not urgent: hypertension, diabetes etc.
3). Not as important but urgent: diagnosis or care of rare genetic diseases etc.
4). Least important and not urgent: patient flow, drug compliance issues etc.
I concur with Dr. Abbott and her team that the authors and editors of the scientific community share the “duty” of ensuring and improving the quality of systematic reviews. As educators, we are equally responsible for providing future clinicians and scholars with the knowledge and skills to conduct high-quality studies. Furthermore, meta-research is important in providing a scoping view of the evidence in review articles and in reminding us of the need for research vigilance. Lastly, it is perhaps also the responsibility of policy makers and governments to ensure that persistent and appropriate resources are channelled into the research community, especially in times of future public health emergencies.
Dear Editor,
This response is in relation to the above-titled article published in June 2019. Firstly, I would like to commend the outstanding research that was done. While reading the article, I came to understand the relationship between the nursing field, evidence-based research, and the ways in which patients benefit from current health practices. Furthermore, the research showed a wide range of benefits for other nursing career paths globally. It presented experts’ views on teaching an evidence-based prospectus, evidence-based deliberations, and stakeholder engagement, all of which can affect the patients involved. I agree with the study’s conclusion that research is essential for future advancements as well as improvements in patient care. Unfortunately, there is not much published research in The Bahamas on evidence-based practices from an expert view. Through further research, this thesis can become widespread and obtain more views on this pressing matter.
We fully agree that “non-publication of trial results and selective outcome reporting…is not a phenomenon that is limited to homeopathy.”
Previous reviews in conventional medicine, such as the study by Kosa et al. in 2018, report “…substantive disagreement in reporting between publications and current clinical trial registry, which were associated with several study characteristics”.[1]
In 2019 The Lancet commented on the reporting of clinical trial data for 30 European universities that sponsor the largest number of trials governed by EU clinical trials regulation: “The report shows that 778 (83%) of 940 clinical trials sponsored by these universities due to post their results on the EU Clinical trials Register (EudraCT) had not done so”.[2]
The International Committee of Medical Journal Editors (ICMJE) announced in 2005 that “… trials that begin enrolment of patients after 1 July 2005 must register in a public trials registry at or before the onset of enrolment to be considered for publication …”.[3] EU rules took effect in 2014, requiring all clinical trials registered in EudraCT to post summary results within 12 months of study completion.[2] Hence, the inclusion by Gartlehner et al. 2022 of homeopathy studies published in or before 2005 does not seem reasonable, and the inclusion of those published in or before 2014 is debatable.
Notwithstanding the above, precise information on sub-groups of studies was not given by Gartlehner et al. 2022. Hence, the conclusion, “This likely affects the validity of the body of evidence of homeopathic literature and may overestimate the true treatment effect of homeopathic remedies”, based on those studies with modified or switched primary outcome measure or the point of time of assessment, is not adequately justified.
Previous studies published in the BMJ that looked at reporting bias across all medical fields showed that (a) half of all registered clinical trials in conventional medicine fail to report their results within a 12-month period, whereas according to Gartlehner et al. 2022 62% of all registered homeopathy trials reach publication,[4] and that (b) inconsistencies in reporting of primary outcomes occur in 43% of conventional medical studies, whilst according to Gartlehner et al. 2022 this occurs in only 25% of published homeopathy trials.[5]
Hence, the most interesting finding is that homeopathy is out-performing conventional medicine in this respect, with lower levels of reporting bias.
References:
1. Kosa SD, Mbuagbaw L, Debono VB, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemp Clin Trials 2018;65:144-150. https://pubmed.ncbi.nlm.nih.gov/29287666/
2. Clinical trial registry reporting: a transparent solution needed. Lancet Oncol 2019; 20:741. https://www.thelancet.com/action/showPdf?pii=S1470-2045%2819%2930350-X
3. Kamran A. Compulsory registration of clinical trials. BMJ 2004;329:637. https://www.bmj.com/content/329/7467/637
4. Goldacre B, DeVito NJ, Heneghan C, et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ 2018;362:k3218. https://pubmed.ncbi.nlm.nih.gov/30209058/
5. Shah K, Egan G, Huan LN, et al. Outcome reporting bias in Cochrane systematic reviews: a cross-sectional analysis. BMJ Open 2020;10(3):e032497. https://pubmed.ncbi.nlm.nih.gov/32184303/
Michael Frass, M.D.
Em. Professor of Medicine
Medical University of Vienna,
Vienna, Austria
Menachem Oberbaum, MD, FFHom (Lond.)
The Center for Integrative Complementary Medicine
Shaare Zedek Medical Center
Jerusalem, Israel
In their recent paper, Gartlehner et al [1] reached the headline conclusion that ‘effect estimates of meta-analyses of homeopathy trials might substantially overestimate the true treatment effect of homeopathic remedies’. Their conclusion is based on a re-analysis of data from one of the systematic reviews published by Mathie et al [2], taking into account the possible impact of a trial’s registration status. Gartlehner et al analysed a subset of 19 trials of non-individualised homeopathic treatment, comparing 6 trials that were registered with 13 trials that were not registered. They observed a statistically significant difference between homeopathy and placebo only for the non-registered trials; however, the difference in effect sizes between registered and non-registered trials did not reach statistical significance.
In conducting their re-analysis, Gartlehner et al have failed to recognise that the meta-analysis by Mathie et al [2] was primarily based on a sensitivity analysis of the 3 trials that comprised reliable evidence (effectively, low risk of bias): the effect-size estimate for those 3 trials collectively yielded a statistically non-significant result. Those 3 trials are amongst the 6 registered trials in Gartlehner’s re-analysis, and so it is no surprise that they contributed to a non-significant pooled effect size. A majority of the other 13 trials, now defined as non-registered [1], had previously been categorised by Mathie et al as high risk of bias [2]. If non-registration contributes to higher risk of bias, producing an overestimated effect size, then Gartlehner et al have discovered nothing new, and it is misleading that they have portrayed the Mathie et al paper [2] as one that could ‘substantially overestimate the true treatment effect of homeopathic remedies’ because it did not reflect the registration status of its analysed trials.
Moreover, the findings of another systematic review by Mathie et al (on clinical trials of individualised homeopathic treatment [3]) also remain unaffected by the Gartlehner et al paper. For the three trials with reliable evidence in that review, sensitivity analysis revealed a small but statistically significant effect size favouring homeopathy [3]. Gartlehner et al were unable to re-analyse these data effectively because of a paucity of registered trials – an observation that is also not surprising, since only 3 of 22 of those homeopathy trials were published after 2007, a year in which fewer than 40% of published trials in conventional medicine were registered [4]. Thus the headline findings from the two systematic reviews and meta-analyses published by Mathie et al [2, 3] stand intact.
References
1. Gartlehner G, Emprechtinger R, Hackl M, et al. Assessing the magnitude of reporting bias in trials of homeopathy: a cross-sectional study and meta-analysis. BMJ Evidence-Based Medicine. Epub ahead of print: 15 March 2022. doi:10.1136/bmjebm-2021-111846.
2. Mathie RT, Ramparsad N, Legg LA, et al. Randomised, double-blind, placebo-controlled trials of non-individualised homeopathic treatment: systematic review and meta-analysis. Syst Rev 2017;6:63.
3. Mathie RT, Lloyd SM, Legg LA, et al. Randomised placebo-controlled trials of individualised homeopathic treatment: systematic review and meta-analysis. Syst Rev 2014;3:142.
4. Wong EKC, Lachance CC, Page J, et al. Selective reporting bias in randomised controlled trials from two network meta-analyses: comparison of clinical trial registrations and their respective publications. BMJ Open 2019;9:e031138.
As French Nuclear Medicine representatives, we read with great interest the article by Le Guludec et al. entitled “Rapid access to innovative medicinal products while ensuring relevant health technology assessment: Position of the French National Authority for Health”. In this interesting and important position paper from the French independent health technology assessment body (HTAb), the “Haute Autorité de Santé” (HAS), the authors state that its recommendations derive from consultations with academics. Although we understand that access to innovative drugs used for Positron Emission Tomography (PET) could be considered a very ancillary issue by the HAS board that authored the paper, these PET imaging molecules are still considered medicinal products from a regulatory standpoint and should be evaluated as such.
We regret that, to our knowledge, none of the academic members of the French Nuclear Medicine Society (SFMN) board were given the opportunity to draw attention to some of the specific features of the drugs commonly used in nuclear medicine by answering the questionnaire sent to the panelists (cf. supplemental material). Indeed, we would clearly have answered “Yes” to the following questions:
- Are there specific methodological issues for Health Technology Assessment you wish to bring to our attention?
- Do you identify methodological issues relative to the assessment of innovative drugs in specific therapeutic areas?
We would also have be...
We were surprised that BMJ Evidence-Based Medicine chose to publish the flawed article by Høeg and co-authors on methodological limitations of research on long COVID (1). This piece appears to be a ‘Trojan Horse’ article, in which a scientifically dubious proposition escapes proper scrutiny because it is cloaked in otherwise plausible research commentary.
As the authors state, we need well designed studies to provide a valid measure of the long-term effects of acute COVID-19 infection (Long COVID). Such studies require robust case definitions, adequate duration of follow-up, and suitable comparison groups.
But in a section titled “The most well-designed studies provide reassuring estimates”, the authors include just two studies to support that sweeping statement. This highly selective ‘mini meta-analysis’ subverts the very purpose of evidence-based medicine. The main message of the Høeg paper appears to be that there is a negligible risk of long COVID, based on the selection of papers they have cited. That message does not fit with the actual body of scientific evidence (2). There is now overwhelming research that SARS-CoV-2 infection carries a significant risk of long-term effects over and above the generic effects of post-ICU syndrome and pneumonia (3).
The evidence of long-term effects comes from multiple sources, including epidemiological studies and basic science research looking at the severe and lasting pathological changes that occur in some pati...
Dr Juan Franco
Editor-In-Chief
BMJ Evidence Based Medicine
BMA House
Tavistock Square
London WC1H 9JP
UNITED KINGDOM
31 October 2023
Dear Editor-In-Chief,
We read with interest the recent article by Høeg and colleagues that describes how methodological limitations in long COVID research distort risk and overestimate prevalence.[1]
The authors propose criteria to improve epidemiological research of long COVID. We write in support of these criteria, and to suggest two additions. We recently compared outcomes three months after PCR-confirmed COVID-19 infection with PCR-confirmed influenza infection, and found no difference between these illnesses.[2] Our comparative observational study had limitations (which we acknowledged) but was noteworthy because it was conducted in an Australian population that was primarily exposed to the Omicron variant after achieving high vaccination rates (>90%).
As a result, our two proposed additions to Høeg et al’s criteria relate to the exposed population which, as they suggest, should have diagnostic evidence of infection.
The first addition is to document the COVID variant to which this population was exposed. Recent data from Sweden shows a progressive (and substantial) decrease in the risk of long COVID from the wild type to the Omicron variant.[3] In addition, the type and frequency of symptoms has changed as the virus evolves.[4] This inclusion would improv...
Dear Editorial Office,
I want to express my concern regarding “Curcumin and proton pump inhibitors for functional dyspepsia: a randomised, double blind controlled trial” by Kongkam et al(1). It was published against the journal’s editorial policy and has serious issues with reporting and interpretation of results.
The article shouldn’t have been published in the first place. It lacks prospective registration, which directly contradicts the BMJ Evidence-Based Medicine editorial policy stating that prospective registration is mandatory for any clinical trial(2). The Thai Clinical Trials Registry(3) registration TCTR20221208003 is retrospective, as is clearly stated in the registry. The registration was submitted on 07 December 2022, just before a preprint was posted on medRxiv on 09 December 2022, while the study was completed on 30 April 2020.
On top of that, there are serious issues with the reporting and interpretation of results.
According to the authors, an equivalence design was used with an equivalence margin of 2 points on the SODA score. Nine comparisons of SODA scores between the curcumin plus omeprazole (C+O), curcumin only (C), and omeprazole only (O) groups were reported. For three of those, the confidence intervals include the equivalence margin. The only available interpretation here is that the trial failed to demonstrate equivalence. To demonstrate equivalence, the confidence intervals should be between the two equivalence margins rath...
Dear Prof. Franco,
I am writing to request further clarification on the paper “Likelihood ratio interpretation of the relative risk”. The “key messages” section of this paper states that the study adds the following to the literature:
⇒ It is demonstrated that the conventional interpretation of the relative risk is in conflict with Bayes’ theorem.
⇒ The interpretation of the relative risk as a likelihood ratio connecting prior (unconditional) intervention risk to outcome conditional intervention risk is required to avoid conflict with Bayes’ Theorem
I will refer to the first bullet point as “Doi’s Conjecture”. Doi’s Conjecture is also stated in the second section of the main text, where it is claimed that “the usual interpretation (33% increase in the +ve outcome under treatment) contravenes Bayes Theorem”.
No attempt is made within the text to prove Doi’s Conjecture. But perhaps more worryingly, no attempt is made to define the term “interpretation”, a term which is not defined in standard probability theory. The meaning of Doi’s Conjecture is therefore at best ambiguous. Moreover, the manuscript relies substantially on claims about how effect measures are “perceived”, another term which is defined neither in probability theory nor in the manuscript.
The relative risk is defined as the risk of the outcome under treatment, divided by the risk of the outcome under the control condition; that is, as a ratio of two probabilities. This manuscript appears to claim that “interpreting” the relative risk in the manner that it is defined is inconsistent with Bayes’ Theorem, a fundamental result in probability theory. If this is true, probability theory is in deep conceptual trouble.
There are multiple correct (and mathematically equivalent) ways to represent effects within a study, to predict risk under treatment and to determine the posterior probability. This manuscript provides no coherent reason for one valid approach to take precedence over another valid approach.
Title: “Claims about the main claim”
Authors: Suhail A. Doi, Polychronis Kostoulas, Paul Glasziou
In response to the published article "Likelihood ratio interpretation of the relative risk"
Rapid response:
September 16, 2022
The problem in evidence-based medicine arises when we port relative risks derived from one study to settings with different baseline risks. For example, a baseline risk of 0.2 and a treated risk of 0.4 for an event in a trial give a RR of 2 (0.4/0.2) and a complementary cRR of 0.75 (0.6/0.8). Thus the ratio of LRs (RR/cRR) is 2/0.75 = 2.67. If applied to a baseline risk of 0.5, the predicted risk under treatment with the RR “interpretation” is 1.0, but with the ratio-of-LRs “interpretation” it is 0.73. Here, the interpretation of the risk ratio as a likelihood ratio, using Bayes’ theorem, clearly gives different results, and solves the problem of impossible risks, as clearly depicted in the manuscript and the example.
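The arithmetic above can be checked directly. The following short Python sketch (our own illustration, not part of the original letter; variable names are ours) reproduces the 1.0 versus 0.73 figures:

```python
# Trial data from the example: baseline risk 0.2, treated risk 0.4.
baseline_risk = 0.2
treated_risk = 0.4

rr = treated_risk / baseline_risk               # relative risk: 0.4/0.2 = 2.0
crr = (1 - treated_risk) / (1 - baseline_risk)  # complementary RR: 0.6/0.8 = 0.75
lr_ratio = rr / crr                             # ratio of LRs: 2/0.75 ≈ 2.67

# Port both "interpretations" to a setting with baseline risk 0.5.
new_baseline = 0.5

# Naive RR multiplication produces an impossible risk of 1.0:
naive = rr * new_baseline

# Likelihood-ratio interpretation: update the odds, then convert back to a risk.
prior_odds = new_baseline / (1 - new_baseline)   # 0.5 -> odds of 1.0
posterior_odds = prior_odds * lr_ratio           # ≈ 2.67
predicted = posterior_odds / (1 + posterior_odds)

print(round(naive, 2), round(predicted, 2))      # prints: 1.0 0.73
```

The second figure stays inside [0, 1] for any baseline risk, which is the point of the likelihood-ratio reading.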
If, in our effort to highlight the need for this correct interpretation, we have used strong wording that annoyed the commentator, we feel the need to express regret. We hope that the commentator might feel similarly about his scientifically unbecoming choice of wording, which culminated in “Doi’s Conjecture”.
Conflict of Interest
None declared