When busy clinicians bump into a new treatment, they ask themselves 2 questions. Firstly, is it better than (“superior to”) what they are using now? Secondly, if it’s not superior, is it as good as what they are using now (“non-inferior”) and preferable for some other reason (eg, fewer side effects or more affordable)? Moreover, they want answers to these questions right away. Evidence-Based Medicine and its related evidence-based journals do their best to answer these questions in their “more informative titles.” That’s why this issue contains titles such as: “Angioplasty at an invasive treatment centre reduced mortality compared with first contact thrombolysis”1 (http://ebm.bmjjournals.com/cgi/content/9/2/42) and “Ximelagatran was non-inferior to warfarin in preventing stroke and systemic embolism in atrial fibrillation.”2 (http://ebm.bmjjournals.com/cgi/content/9/2/43) The latter of these 2 studies prompted this editorial.
Progress toward this “more informative” goal has been slow because we have been prisoners of traditional statistical concepts that call for 2-sided tests of statistical significance and demand rejection of the null hypothesis. We have further imprisoned ourselves by misinterpreting the “statistically nonsignificant” results of these 2-tailed tests. Rather than recognising such results as “indeterminate” (uncertain), we conclude that they are “negative” (certain, providing proof of no difference between treatments). This editorial will address the problems created by these ways of thinking and, more importantly, their clinically relevant solutions.
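The distinction between “indeterminate” and “negative” can be made concrete with a confidence interval. The sketch below uses entirely hypothetical trial numbers (the event counts, sample sizes, and the ±5% margin of clinical indifference are invented for illustration) to show a risk difference whose 95% confidence interval includes zero, so a 2-sided test is “nonsignificant,” yet the interval is too wide to rule out a clinically important difference in either direction:

```python
import math

# Hypothetical trial (illustrative numbers only, not from any real study)
events_new, n_new = 30, 200   # 15% event rate on the new treatment
events_std, n_std = 26, 200   # 13% event rate on the standard treatment

p_new = events_new / n_new
p_std = events_std / n_std
diff = p_new - p_std          # observed risk difference

# Standard error of the difference between two independent proportions
se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)

# Two-sided 95% confidence interval for the risk difference
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"risk difference = {diff:.3f}, 95% CI {lo:.3f} to {hi:.3f}")

# The 2-sided test is "nonsignificant" exactly when the CI includes 0.
# But a nonsignificant result is indeterminate, not proof of no difference:
# this CI is compatible with both meaningful benefit and meaningful harm.
margin = 0.05  # assumed margin of clinical indifference, for illustration
print("superiority shown (CI excludes 0):", lo > 0 or hi < 0)
print("similarity shown (CI within margin):", -margin < lo and hi < margin)
```

With these numbers, both conclusions come out false: the trial has shown neither a difference nor its absence, which is precisely the “indeterminate” reading the editorial argues for.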
At the root of our problem is the “null hypothesis,” which decrees that the difference between a new and standard treatment ought to be zero. …