Article Text
Abstract
Objectives Over 30 years ago we demonstrated the poor criterion validity of a popular fitness test, the 20 m shuttle run or ‘bleep’ test (20mSRT). We discounted the test and assumed that others with demonstrable validity and reliability would replace its use in research. Around then, our attention was drawn to an eloquent but obscure paper by JM Tanner (1949) which detailed the fallacy of simple division by body mass to accommodate body size differences in physiological function. Tanner described how incorrect analyses led to patients having ‘no more formidable disease than statistical artefact’. Aware of the significance of this paper for our own field, over the next 15 years we published numerous data and tutorial papers demonstrating appropriate methods to measure and interpret cardiorespiratory fitness (CRF) during growth. Not only is the 20mSRT not a valid estimate of measured CRF; it also predicts values expressed as a simple ratio with body mass.
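Tanner's objection can be sketched in a few lines (the notation and the log-linear form are ours, added for illustration, not drawn from the abstract). Reporting CRF per kilogram of body mass implicitly assumes that peak oxygen uptake scales linearly with mass; allometric modelling instead estimates the scaling exponent from the data:

```latex
% Ratio standard: CRF reported as \dot{V}O_{2\max}/m,
% which implicitly assumes \dot{V}O_{2\max} = a\,m^{1}.
%
% Allometric alternative: fit the exponent b rather than assume it,
\dot{V}O_{2\max} = a\, m^{b}\, \varepsilon
\quad\Longrightarrow\quad
\ln \dot{V}O_{2\max} = \ln a + b \ln m + \ln \varepsilon
% If the fitted b differs from 1, the simple ratio \dot{V}O_{2\max}/m
% systematically mis-ranks children by body size -- the 'statistical
% artefact' Tanner described.
```

The log-linear (power-function) form is a standard way of testing the ratio assumption: b = 1 recovers the simple ratio standard, so the ratio is a special case that the data may or may not support.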
Method Despite our efforts, the past 10 years have seen a global explosion in published research studies of children’s CRF anchored in these flawed methodologies. Data from millions of children worldwide have been collated into international norms, used to examine present and predict future cardiovascular and metabolic health, and to identify individual children who warrant intervention to reduce their risk of future cardiovascular disease – the raising of ‘clinical red flags’. Data from these studies present patterns of temporal change in CRF which directly conflict with rigorously collected and appropriately analysed laboratory data. The 20mSRT is being supported by international movements as a way of monitoring physical activity levels, although objective data reveal the two to be unrelated. Moreover, clinical populations of children with serious life-limiting conditions are being put through maximal laboratory exercise testing, with conclusions about their health status being made upon inappropriate statistical analyses.
Results We believe the continued use of these flawed methodologies in vast numbers of children worldwide to be ethically and morally indefensible. By way of response we have, within the past 12 months: submitted 7 original data papers based upon extensive cross-sectional and longitudinal data founded on over 2000 rigorously determined individual assessments, all of which provide details of and recommendations for statistically justified analytical methods; submitted 7 editorial/commentary pieces to paediatric medical, sports medicine and physical education journals; and written 2 responses to letters from those entrenched in poor methodologies. Despite our polite, transparent, scientifically based pleas for ‘constructive, collaborative debate’ we have encountered editorial bias, e.g. manuscripts turned down without review or turned down despite positive reviews; we have appealed editorial decisions and been prevented from responding to letters commenting on our work.
Conclusions Others have attempted to diminish our contributions by adopting in letters a tone of thinly disguised hostility, or by accusing us of evangelistic fervour whilst failing to justify their own methods. Yes, we are challenging; shifting an entire research culture, which has its roots in university teaching, is not easy – scientific rigour in aspects of our discipline plays second fiddle to the practical, convenient, traditional and feasible. Although this is happening on the periphery of mainstream medical research, children’s health matters, and as the population becomes increasingly sedentary and overweight we urgently need to develop scientifically rigorous methods to measure and interpret CRF in health and disease. Already a generation of researchers and policy makers has been misinformed and misled by flawed data. Those of us facing these challenges need to work together to develop strategies for shifting research culture back towards defensible science.