Pragmatic clinical trials (PCTs) often study interventions delivered within the context of standard clinical encounters with the overall goal of producing generalisable knowledge to inform implementation strategies and health policy.1 In reality, however, PCTs have a gradient of pragmatic and explanatory features, as described by the PRagmatic Explanatory Continuum Indicator Summary, 2nd edition (PRECIS-2) framework.2 To facilitate the process of iterative learning, PCTs and comparative effectiveness trials frequently test interventions shown to be effective in explanatory trials, the latter having more stringent entry criteria. PCTs are particularly valuable for assessing the use of non-pharmacological interventions, such as those designed to manage pain. Conducted in settings involving a broad range of patients, and delivered by a range of qualified clinicians who may or may not have a research background, PCTs can illuminate implementation barriers and practice variations affecting the delivery of clinical interventions that may or may not be widely supported by institutional culture.
Terminology is essential to determine study risk appropriately and to interpret trial results accurately. Ambiguity in the terms used to describe (and differentiate) control and experimental interventions surfaces in unexpected ways, driven by context-specific notions of efficacy and trial oversight. Inconsistent use (and thus variable interpretation) of the terms usual care, standard of care, validated care and experimental care by researchers, institutional review boards, clinical staff and patients can create confusion that affects the conduct of PCTs. In particular, capricious use of the term ‘usual care’ in PCTs challenges the manner in which ethics and regulatory determinations are made. Without a clear understanding of the use of care terminology, the full practical potential of PCTs can be compromised, potentially affecting the scientific integrity of important clinical research activities and limiting the studies’ value. Within PCTs evaluating non-pharmacological pain management interventions, clinical staff may …
Contributors All authors contributed to the conceptualisation of the paper and contributed substantially to the final draft. All authors approved the final draft. DIR and AFD created the initial draft.
Funding Research reported in this publication was made possible by support from grant number U24 AT009769 from the National Center for Complementary and Integrative Health (NCCIH) and the Office of Behavioral and Social Sciences Research (OBSSR), in addition to support from the NCCIH UG3/UH3AT009763 and UH3AT009761 cooperative agreements and the Assistant Secretary of Defense for Health Affairs, endorsed by the Department of Defense, through the Pain Management Collaboratory – Pragmatic Clinical Trials Demonstration Project Award W81XWH-18-2-0008. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NCCIH, OBSSR and the National Institutes of Health (NIH). This manuscript is a product of the NIH-DOD-VA Pain Management Collaboratory. For more information about the Collaboratory, visit https://painmanagementcollaboratory.org/.
Disclaimer The view(s) expressed herein are those of the author(s) and do not necessarily reflect the official policy or position of the US Defense Health Agency, Brooke Army Medical Center, the Uniformed Services University, the Department of Defense, the Department of Veterans Affairs, NCCIH, NIH or the US Government.
Competing interests Work by all authors was supported by grants from either the US National Institutes of Health (NIH), National Center for Complementary and Integrative Health (NCCIH), the Office of Behavioral and Social Sciences Research (OBSSR) or the US Department of Defense. In all cases, funding was provided to institutions and not to individuals. No other competing interests to declare.
Provenance and peer review Not commissioned; internally peer reviewed.