Why is a small study a problem?
When reading an article, we often wonder whether the study was large enough. If a study does not find a statistically significant effect (eg, at p<0.05), it may be because the study was too small or because there actually is no true effect. You should check whether the confidence intervals (CIs) show that the data are consistent with the effect being clinically important, even though the effect was not “statistically significant.”
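This check can be sketched numerically. The example below is hypothetical (all numbers, including the minimal clinically important difference, are invented for illustration): a trial whose observed difference is not statistically significant at the 5% level, yet whose 95% CI still reaches a clinically important effect, so clinical importance cannot be ruled out.

```python
# Hypothetical blood-pressure trial: the result is not statistically
# significant, yet the CI still contains a clinically important effect.
# All numbers are invented for illustration.

def ci_95(diff, se):
    """95% confidence interval using the normal approximation (z = 1.96)."""
    z = 1.96
    return diff - z * se, diff + z * se

diff, se = 4.0, 2.5   # observed difference (mmHg) and its standard error
mcid = 5.0            # assumed minimal clinically important difference (mmHg)

lo, hi = ci_95(diff, se)
not_significant = lo < 0 < hi   # CI crosses zero: p > 0.05
maybe_important = hi >= mcid    # CI reaches the MCID: importance not excluded
print(f"95% CI: ({lo:.1f}, {hi:.1f})")  # → 95% CI: (-0.9, 8.9)
```

Here the CI runs from about -0.9 to 8.9 mmHg: it includes zero (no significant effect) but also includes differences larger than the assumed MCID, so the study was too small to settle the clinical question.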
How can we tell whether the study was too small?
The CI quantifies the error, and thus the uncertainty, associated with using study results to draw inferences about wider population values: the upper and lower limits give the plausible range of population values. If the CI is very wide, there is little certainty that the study result is a good estimate, and the study is likely to have been too small. Conversely, if the entire CI lies below the smallest clinically important effect, then the data are not consistent with a clinically important effect, no matter how large or small the study, how wide or narrow the CI, or how statistically significant the result. Since studies sometimes do not report a CI, it is helpful to have an approximate idea of the size requirements of different types of studies.
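The complementary case can also be sketched with invented numbers: a large, precise study whose whole CI sits below an assumed minimal clinically important difference, so a clinically important effect is ruled out even though the result is also not statistically significant.

```python
# Hypothetical large, precise study: the entire CI lies below the
# assumed minimal clinically important difference, so clinical
# importance is ruled out. All numbers are invented for illustration.

def ci_95(diff, se):
    """95% confidence interval using the normal approximation (z = 1.96)."""
    return diff - 1.96 * se, diff + 1.96 * se

diff, se = 0.5, 0.3   # small observed difference, small SE (large study)
mcid = 5.0            # assumed minimal clinically important difference

lo, hi = ci_95(diff, se)
rules_out_importance = hi < mcid   # whole CI below the MCID
print(f"95% CI: ({lo:.2f}, {hi:.2f})")  # → 95% CI: (-0.09, 1.09)
```

Here the CI is narrow (about -0.09 to 1.09), crosses zero, and lies wholly below the MCID: the study is both "not significant" and informative, because it excludes any clinically important effect.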
Sample size and confidence intervals
The larger a study, the smaller the random error (quantified by the standard error [SE]), …
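The relationship between sample size and precision can be illustrated with the familiar formula SE = SD/√n for the mean: because the SE shrinks with the square root of n, quadrupling the sample size halves the SE and therefore halves the width of the CI. A minimal sketch, with an assumed population SD of 10:

```python
# Sketch of how the standard error of a mean shrinks with sample size:
# SE = sd / sqrt(n), so quadrupling n halves the SE and the CI width.
# The SD of 10 and the sample sizes are assumed for illustration.
import math

def se_mean(sd, n):
    """Standard error of a sample mean."""
    return sd / math.sqrt(n)

sd = 10.0
for n in (25, 100, 400):
    se = se_mean(sd, n)
    width = 2 * 1.96 * se   # full width of the 95% CI for the mean
    print(f"n={n:4d}  SE={se:.2f}  CI width={width:.2f}")
```

Going from n = 25 to n = 100 to n = 400, the SE falls from 2.0 to 1.0 to 0.5, and the 95% CI width from about 7.8 to 3.9 to 2.0: each fourfold increase in sample size buys only a halving of the uncertainty.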