
The effectiveness and feasibility of TREAT (Tailoring Research Evidence and Theory) journal clubs in allied health: a randomised controlled trial

Abstract

Background

Journal clubs (JC) may increase clinicians’ evidence-based practice (EBP) skills and facilitate evidence uptake in clinical practice; however, there is a lack of research into their effectiveness in allied health. We investigated the effectiveness of a structured JC that is Tailored According to Research Evidence And Theory (TREAT) in improving EBP skills and practice, compared with a standard JC format, for allied health professionals. Concurrently, we explored the feasibility of implementing TREAT JCs in a healthcare setting by evaluating participating clinicians’ perceptions and satisfaction.

Methods

We conducted an explanatory mixed methods study involving a cluster randomised controlled trial with a nested focus group for the intervention participants. Nine JCs with 126 allied health participants were randomly allocated to receive either the TREAT or the standard JC format for 1 h/month for 6 months. We measured EBP skills and attitudes pre- and post-intervention using the EBP questionnaire and the Assessing Competence in Evidence-Based Medicine tool, together with a tailored satisfaction and practice change questionnaire. Post-intervention, we also conducted a focus group with TREAT participants to explore their perceptions of the format.

Results

There were no significant differences between JC formats in EBP skills, knowledge or attitudes, or in influence on clinical practice, with participants maintaining intermediate-level skills across time points. Participants reported significantly greater satisfaction with the organisation of the TREAT format. Participants in both groups reported positive changes to clinical practice. Perceived outcomes of the TREAT format and facilitating mechanisms were identified, including the use of an academic facilitator, a group appraisal approach and consistent appraisal tools, which assisted skill development and engagement.

Conclusions

It is feasible to implement an evidence-based JC for allied health clinicians. While clinicians were more satisfied with the TREAT format, it did not significantly improve their EBP skills, attitudes, knowledge or practice compared with the standard format. The use of an academic facilitator, group-based critical appraisal, and the consistent use of appraisal tools were perceived as useful components of the JC format. A structured JC may maintain EBP skills in allied health clinicians and facilitate engagement; however, additional training may be required to further enhance EBP skills.

Trial registration

ACTRN12616000811404. Retrospectively registered 21 June 2016.


Background

Clinical practice informed by research evidence is an important pre-requisite for health professionals to deliver quality patient outcomes [1, 2]. Allied health professionals, who make up the third largest healthcare clinical workforce, commonly report barriers to providing evidence-based practice (EBP), including knowledge and confidence gaps and lack of time [3,4,5]. A journal club (JC) is a group of individuals who meet regularly to critique and discuss research articles, and JCs are recognised as a tool to increase clinicians’ knowledge and use of research evidence in clinical practice [3, 6,7,8]. However, there is a lack of research exploring the effectiveness of JCs for allied health clinicians working in healthcare services.

Recent systematic reviews indicate that the majority of research exploring the effectiveness of JCs involves medical professionals [6, 9]. While heterogeneity among studies restricted meta-analyses in these reviews, previous randomised controlled trials have demonstrated increased self-reported knowledge in medical interns [10], and improved objectively measured critical appraisal skills in surgeons [11], who participated in a JC compared with control groups. Most research on the effectiveness of JCs within allied health comprises case-based studies or small uncontrolled study designs [3, 7, 12,13,14]. The largest study evaluating the effectiveness of JCs in allied health recruited 93 clinicians across five professional groups (physiotherapy, speech pathology, nutrition, occupational therapy and social work) to participate in a structured journal club model based on principles of adult learning and a collaborative approach between researchers and clinicians [7]. Following the six-month trial, the results suggested significant improvements in objective and self-reported measures of EBP knowledge, as measured by the Adapted Fresno Test and EBP uptake scale, with some professional groups also reporting increased evidence uptake and improved attitudes towards EBP [7]. While the uncontrolled study design was acknowledged as a limitation, other researchers have recognised that randomised controlled designs can be challenging to implement in educational and translational research [15]. Further, qualitative methodologies have been recommended to supplement randomised study designs, to assist with explaining results [9]. It is also important that future research evaluates JCs that are grounded in existing research and theory, and that the goals of any intervention are tailored to the specific organisational and/or professional context [7, 9, 15].

Local context

JCs are currently the most frequently used intervention to improve allied health clinicians’ knowledge and skills in EBP within Gold Coast Health (GCH), Australia. A group of local EBP champions, consisting of approximately 12 GCH allied health clinicians, conducted a quality improvement project with the guidance of authors RW and SM to evaluate local JC processes. The project identified substantial variation in both the implementation of JCs and their impact on individuals’ knowledge and use of research evidence in clinical services. In response, we synthesised the research evidence for effective implementation of JCs from two recent systematic reviews of 12 and 18 studies [6, 9]. This synthesis revealed 11 “key components” of effective JC implementation, including an overarching goal or purpose, support from researchers, a facilitator to guide discussion, adherence to principles of adult and multi-faceted learning, and evaluation of knowledge uptake [6, 9]. An audit of existing JCs identified that most allied health JCs within GCH incorporated relatively few of these key components as a routine part of their sessions [6, 9].

Aim

We aimed to evaluate the effectiveness and feasibility of implementing a JC for allied health clinicians that was informed by the best available research evidence and theory [6, 9]. Eleven key components identified in the research evidence were incorporated into a structured JC format called “TREAT” (Tailoring Research Evidence and Theory), as shown in Table 1. We evaluated the effectiveness of the TREAT JC format in improving clinicians’ EBP skills, knowledge, attitudes and practice compared with the standard JC format for allied health professionals. We also sought clinicians’ satisfaction with, and perceptions of, the TREAT JC format to explore the feasibility of implementation.

Table 1 Components of TREAT journal club format

Methods

Study design

We conducted an explanatory sequential mixed methods project to address both implementation and intervention objectives. A cluster randomised controlled trial was used to investigate changes in allied health clinicians’ EBP skills, knowledge, attitudes and practice, and a nested focus group captured clinicians’ perceptions of the TREAT JC format. Ethical approval for the study was provided (HREC/15/QGC/310) prior to commencement. The trial was retrospectively registered on 21 June 2016 (ACTRN12616000811404).

Participants

After seeking approval from line managers, we invited all 13 existing allied health JCs within GCH to participate. Nine JCs agreed and were randomised to receive either the TREAT JC format or to continue with their standard JC format (see Fig. 1). Written informed consent was obtained from all participants. The authors classified each JC as “small” (6–15 attendees) or “large” (> 16 attendees), and this difference in size was used to stratify randomisation. An independent researcher used a computer-generated random block design to allocate the JCs within each stratum to TREAT or standard. Allocations were concealed in sequentially numbered opaque envelopes (1 to 5 “small” and 1 to 5 “large”), which were opened by the JC participants of each group after their pre-assessment sessions. To maximise variation in professional experience and JC attendance, we purposively sampled from the 61 TREAT-format participants and invited 18 clinicians to participate in focus groups. Eight clinicians, from four different TREAT JCs, consented to attend.
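
The allocation procedure described above can be illustrated with a short sketch of stratified block randomisation. This is a minimal illustration only: the block size, random seed and the split of clubs across strata shown here are assumptions for demonstration, not the parameters used in the trial, whose randomisation was performed independently.

```python
import random

def stratified_block_allocation(clubs_by_stratum, block_size=2, seed=None):
    """Allocate journal clubs to TREAT or Standard using randomly permuted
    blocks within each size stratum ("small"/"large").

    A minimal sketch: block size and seed are illustrative assumptions.
    """
    rng = random.Random(seed)
    allocation = {}
    for stratum, clubs in clubs_by_stratum.items():
        labels = []
        while len(labels) < len(clubs):
            block = ["TREAT", "Standard"] * (block_size // 2)
            rng.shuffle(block)           # permute arms within each block
            labels.extend(block)
        for club, arm in zip(clubs, labels):
            allocation[club] = arm       # e.g., sealed in a numbered opaque envelope
    return allocation

# Hypothetical usage: nine clubs split across the two strata (split assumed).
clubs = {"small": ["JC1", "JC2", "JC3", "JC4", "JC5"],
         "large": ["JC6", "JC7", "JC8", "JC9"]}
print(stratified_block_allocation(clubs, seed=42))
```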

Fig. 1 Participant flow through study

Intervention

All JCs were encouraged to continue meeting for their one-hour, monthly JC at their nominated time and venue for the six-month intervention period. Five JCs were randomly allocated to the TREAT format, which incorporated eleven key components of successful journal clubs (see Table 1). These key components were tailored in consultation with a group of allied health EBP champions across GCH to optimise operational feasibility. The TREAT JCs were facilitated by academic allied health researchers experienced in teaching and using EBP (RT 20/30 JCs, RW 6/30 JCs and SM 4/30 JCs). Each JC followed a consistent format (see Additional file 1) including: initial goal setting to identify relevant topics; use of the PICO approach to clarify clinical questions; group critical appraisal using structured Critical Appraisal Skills Programme (CASP) [16] tools; engaging librarian support; and formally documenting actions. Clinicians allocated to the standard group were asked to continue using their existing JC format for the duration of the trial. In contrast to the TREAT format, this generally consisted of a clinician choosing an article of interest, appraising it themselves (with or without a formal appraisal tool) and presenting it to the rest of the group, without any formal facilitation or follow-up. Clinicians in both TREAT and standard JCs completed an assessment of their EBP knowledge and skills at baseline and immediately after the six-month JC intervention. Adherence and adaptation [17] to the TREAT format were monitored monthly by the research team. Due to the nature of the intervention, blinding of participants was not possible; however, the use of both objective and subjective outcome measures helped to reduce bias associated with known treatment allocation.

Materials

The evidence based practice questionnaire (EBPQ)

The EBPQ is a 24-item self-report questionnaire used to assess an individual’s practice of, attitudes towards and knowledge of EBP [18]. Responses to items are recorded on a seven-point Likert scale, with higher scores indicating a more positive attitude towards EBP [18]. The practice behaviour subscale asks respondents how frequently they practice the five phases of EBP and how frequently they share EBP information with colleagues. EBP attitudes are measured with four statements, with respondents asked to nominate which statement is most like them. Finally, self-reported EBP knowledge is assessed by 14 items in which respondents rate themselves on key EBP tasks. There are no reported cut-offs for this measure. In a nursing population, the EBPQ has been found to have high internal consistency (α = 0.87), with subscale internal consistency ranging from α = 0.79 to α = 0.91 [18]. Moderate positive construct validity has been reported, with correlation coefficients ranging between 0.3 and 0.4 [18].

Assessing competence in evidence-based practice (ACE tool)

The ACE tool is a 15-item measure of EBP skills applied to a hypothetical clinical scenario, literature search and set of results [19]. Items are scored dichotomously and are grouped into four of the five steps of evidence-based practice [19]. Construct validity was assessed in three medical trainee cohorts, showing statistically significant trends corresponding to level of training: the mean ACE total score was 8.6 ± 2.4 for EBM (evidence-based medicine)-novice, 9.5 ± 1.8 for EBM-intermediate and 10.4 ± 2.2 for EBM-advanced trainees. Cronbach’s alpha for internal consistency was 0.69 [19].

After the JC intervention, participants completed a purpose-designed self-report questionnaire exploring their satisfaction with the JC and how their clinical practice was influenced by each JC session (see Additional file 2). For the influence-on-clinical-practice component of the questionnaire, clinicians were asked to nominate whether they attended each JC session and whether the topic was of clinical relevance to their practice. For sessions that were attended and relevant, clinicians were then asked to rate (on a five-point Likert scale) how strongly they agreed that the article discussed either changed or confirmed their current practice. The higher of the two ratings (changed or confirmed) was used for analyses. Where a change in practice was reported, clinicians were asked to identify the type of change (e.g., adopting a new guideline or treatment strategy).
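
As a minimal sketch of the scoring rule described above (keeping only sessions that were attended and clinically relevant, then taking the higher of the “changed practice” and “confirmed practice” ratings), using hypothetical column names rather than the questionnaire’s actual field labels:

```python
import pandas as pd

# Hypothetical long-format responses: one row per clinician per JC session.
responses = pd.DataFrame({
    "clinician": [1, 1, 2],
    "session": [1, 2, 1],
    "attended": [True, True, False],
    "relevant": [True, False, True],
    "changed_practice": [4, 3, 5],    # 5-point Likert: article changed my practice
    "confirmed_practice": [5, 2, 4],  # 5-point Likert: article confirmed my practice
})

# Restrict to sessions that were both attended and clinically relevant,
# then take the higher of the two Likert ratings for analysis.
eligible = responses[responses["attended"] & responses["relevant"]].copy()
eligible["influence"] = eligible[["changed_practice", "confirmed_practice"]].max(axis=1)
print(eligible[["clinician", "session", "influence"]])
```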

Clinician focus group

Following the JC intervention, focus groups with participants from the TREAT JCs were conducted to gain deeper insight into clinicians’ experiences. Themes from the purpose-designed clinician satisfaction questionnaire were used to inform probing questions for the focus groups. Two one-hour focus groups were conducted by independent facilitators using a semi-structured interview guide (Additional file 3).

Data analyses

Randomisation occurred at the group level and data analyses were performed at the individual level. As this was a pilot research project, we recruited a convenience sample based on the existing number of journal clubs within the recruitment site and an estimated 80% consent rate.

Quantitative analysis

Between-group differences on the EBPQ and ACE tool were analysed using mixed effects models, with random effects for JC cluster and individual participant and fixed effects including the group × time (pre/post) interaction to test the effect of TREAT versus standard JCs over time. Covariates in the model included attendance, age, gender, profession, possession of a research higher degree, and level of clinical experience. JC cluster contributed less than 10% of the total variation in all cases, so the final mixed effects analyses used individual participant as the only random effect. For outcome measures administered only post-intervention (i.e., the satisfaction questionnaire and influence-on-practice ratings), simple least squares regression or logistic regression was used as required, with group as the variable of interest. As the impact of covariates was not an aim of the study, only covariate results significant at the 0.05 level are reported. To explore specific skills more closely, item-level responses to the ACE tool questions were also analysed descriptively; dummy codes were used to evaluate changes between correct and incorrect answers before and after the intervention. All quantitative analyses were conducted on a per-protocol basis with no imputation of missing data.
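
A minimal sketch of this type of model is shown below, using a random intercept for participant only (as in the final analyses) and the group × time interaction as the term of interest. The file name and variable names are assumptions for illustration; they are not the study’s actual dataset or coding.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per time point.
# Assumed columns: participant, jc_group, time, ebpq_total, attendance, age,
# gender, profession, higher_degree, experience.
df = pd.read_csv("ebpq_long.csv")

# Random intercept for participant; the jc_group:time interaction tests whether
# TREAT and standard JCs change differently from pre- to post-intervention.
model = smf.mixedlm(
    "ebpq_total ~ jc_group * time + attendance + age + gender + profession"
    " + higher_degree + experience",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())  # inspect the jc_group:time coefficient and its p-value
```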

Qualitative analysis

Data were gathered from open-ended questions within the satisfaction questionnaires and from focus group transcripts. Two levels of analysis were undertaken: a thematic description of the data, followed by an interpretative thematic analysis looking for latent themes [20]. For the first level, we used content analysis to develop initial categories from the questionnaires. Focus group transcripts were then coded by one of the authors (RW), with categories and sub-categories formed from the semantic meanings of the data and discussed and checked with SM. A second level of thematic analysis and synthesis was then conducted by RW to identify explanatory themes from the data of participants taking part in the TREAT format, which were checked and discussed with SM. At this level, inductive coding was used to examine the underlying ideas, assumptions and conceptualisations in the data [20].

Mixed methods interpretation

Quantitative and qualitative data were analysed independently and then brought together for interpretation.

Results

Nine journal clubs participated in the trial. Pre-assessment surveys were completed by 126 clinicians, and 80 (~ 64%) completed both pre- and post-assessment questionnaires. Reasons for loss to follow-up are provided in Fig. 1 and included moving to another work role (n = 15), extended leave (n = 5), and no reason given (n = 25) (i.e., participants who did not respond to contact from the researchers). No significant differences in baseline total scores on the ACE tool or EBPQ, age, gender, practice setting, or years of clinical experience were found between those who did and did not complete the post-assessment. There were, however, significant differences for profession (p = 0.03) and the proportion of participants holding research higher degrees (p = 0.048): approximately 10% more participants in the drop-out group had completed a research higher degree, and the drop-out group contained fewer speech pathologists and physiotherapists and more pharmacists.

There were no differences between the two groups in demographic variables or pre-assessment measures (see Tables 2 and 3). The majority of participants were female (106/125, 85%) and aged between 20 and 29 years (47/125, 38%) or 30 and 39 years (45/125, 36%), with just under half having between 2 and 5 years (29/125, 23%) or 5 and 10 years (30/125, 24%) of clinical experience. Attendance at the six journal club sessions ranged from 0 to 6 sessions, with an average of 3.8 sessions. Participants in the TREAT focus groups ranged in clinical experience and included base-grade clinicians (n = 3), senior clinicians (n = 4) and one team leader, with representation from four of the five TREAT journal clubs across four professions (the invited participants from JC 1 were unavailable). Topics discussed at each journal club are presented in Additional file 4. Due to unforeseen service changes, one journal club in the standard group held only four JC meetings across the six months.

Table 2 Participant Demographic Information
Table 3 Comparison between TREAT and Standard Journal Club pre- to post-intervention

Some adaptations to the original TREAT JC format were made during the trial. Due to time constraints in the JC session, the use of didactic teaching was removed from the TREAT format after the first week and converted into a paper-based and electronic resource. While clinicians were invited to bring food along to their JC sessions, this did not consistently occur.

EBP practice, attitudes and knowledge (EBPQ)

Overall, allied health clinicians scored slightly above the mid-point for self-reported EBP practice at pre-assessment (TREAT M = 26.5, Standard M = 26.9 out of a possible 42), and scores increased slightly at post-assessment on the EBPQ (Table 3). However, there was no difference between TREAT and standard JC participants on the EBP practice subscale at post-assessment. EBP attitudes were very positive at pre-assessment (TREAT M = 21.7, Standard M = 21.9 out of a possible 28) and remained so at post-assessment, with no statistically significant difference between groups. Participants’ self-reported EBP knowledge was slightly above average at pre-assessment (TREAT M = 63.8, Standard M = 66.2 out of a possible 119), with no difference between groups at post-assessment. The mixed effects models revealed that profession (p < 0.05), gender (p = 0.038), clinical experience (p = 0.05) and possession of a research higher degree (p = 0.003) significantly influenced certain items of participants’ EBPQ ratings. There was, however, no effect of group, as indicated by a non-significant group × time interaction.

EBP skills (ACE tool)

There were no significant between-group differences in the skills-based assessment of EBP (see Additional file 5 for descriptive analyses of individual ACE tool items). Clinicians in both groups scored at the intermediate level at both pre- and post-assessment, and there were no significant differences between groups on the four key steps of EBP before or after the intervention. While some variables, including age, clinical experience and profession, were significant predictors for certain ACE tool items (p < 0.05), there was no significant group × time interaction to indicate any effect of the TREAT JC compared with the standard JC.

Satisfaction

All participants rated the JCs highly for overall satisfaction, usefulness and value (Table 4). However, TREAT JC participants were significantly more satisfied with the organisation of the journal club than standard JC participants. All participants would recommend journal clubs to other clinicians. When the mixed model accounted for clinical experience, there was also a significant difference between groups for the item on whether the JC should continue (p = 0.007), with the TREAT JC rated more favourably. There was a significant effect of research higher degree type on perceptions of the JC being valuable (p = 0.024); however, this did not affect the detection of differences between the two groups.

Table 4 Post-Intervention comparison of satisfaction

Influence on clinical practice

Participants in both groups rated the JC as positively influencing their clinical practice across the sessions, with scores averaging above 3.5 out of 5 (Table 5); no significant difference between groups was found. The most frequently reported clinical practice changes were updating a guideline or pathway and adopting new therapy strategies (Additional file 6). Fewer clinicians reported stopping a therapy or initiating a research or quality activity as a result of a JC session. A significant difference in the frequency of clinical practice changes was identified between groups for session 6, where significantly more clinicians in the standard group than in the TREAT group reported adopting a new treatment strategy (p = 0.005). As this difference was likely influenced by factors external to the JC format, including the evidence appraised that session and how the appraisal was conducted, we cannot meaningfully interpret this finding. No other significant differences between groups were identified.

Table 5 Clinician ratings of influence of journal club session on clinical practice

Qualitative categories and themes

Initial descriptive analyses of the open-ended questionnaire responses (both groups) and focus group data (TREAT group only) revealed four main categories reflecting the feasibility of the intervention: 1) outcomes of the JC, 2) facilitating mechanisms, 3) challenges, and 4) suggestions for improvement. Within these categories, some differences were identified between the TREAT and standard JCs, as shown in Table 6. For example, TREAT JC participants reported some individual outcomes that differed from those of standard JC participants, including increased knowledge of and confidence in critical appraisal and changes to clinical practice such as developing a new pathway of clinical care: “we got a new pathway and we were able to do a quality activity and write up and circulate it with our colleagues…so that was really great” [F2]. The TREAT format was also reported to be more time efficient: “so it [appraisal] was done as a group….whereas previously the [presenter] would have … pre-prepared the appraisal so I think it was more time efficient” [F1].

Table 6 Summary of qualitative themes from questionnaire and focus group

To further explore the feasibility of the TREAT JC from the clinicians’ perspective, a second level of thematic analysis was undertaken. This analysis revealed two explanatory themes relating to the feasibility of the format: 1) skill development, and 2) engagement, as shown in Fig. 2 together with subthemes. Within the theme of skill development, clinicians reported that access to expertise and tools was important. This included having access to the facilitator as part of the TREAT format: “what everyone really appreciated was having an expert in the room to ask” [F1]. Several attributes of the academic facilitator were described as contributing to the perceived value of the academic role in the TREAT format, including having a “specific skill set… to impart that expertise to the person who’s asking the questions” [F2] and “keeping the flow and discussions happening” [F1]. Other components reported to be useful for skill development included “access to library” [F2], incidental education from the facilitators [F2] and use of CASP appraisal tools [Q3] to allow “practice to analyse articles with more detail” [Q9]. Clinicians did, however, also suggest extra education, including “basic training… on how to interpret research articles, plot charts, p values etc.” [Q5], and some kind of ongoing support or “check-ups” [Q1] with an academic facilitator. Within the theme of skill development, clinicians also reported increased critical thinking, including analysing articles more robustly, “we’re talking about things that we didn’t before. So we’re analysing articles in a more robust way” [F1], and an increased desire and self-efficacy to use appraisal: “it has given me the ability and desire to read, analyse, discuss and apply more” [Q2] and “I think by going through TREAT ….I’m much more critical and nit-picky… it shows you to really tear the article apart” [F1]. Clinicians in the TREAT JC also felt the hands-on practice of EBP skills within the JC sessions increased their utilisation of these skills, “For me personally, I just got to utilise those skills a lot more”, and resulted in increased confidence to apply the skills outside the JC: “I think just the actual practising of skills then made me confident to look at my own area” [F1].

Fig. 2 Thematic analysis of TREAT qualitative data

The second theme found in the data related to engagement and included subthemes of informal group collaboration, accountability and attendance, and relevance to clinical practice. The collaborative, group-based appraisal was seen to promote engagement and to be less intimidating, with clinicians reporting “it was more collaborative and I guess less stressful for the person that’s actually brought that article to the group” [F2], and another commenting, “everyone seemed to (have) participated a lot more during TREAT than before or after” [F1]. The prioritisation of topics as a group also facilitated engagement: “the goal setting at the start of the six months just helped make sure we had something that everyone was interested in” [F1]. Having everyone read articles beforehand was seen as a way to further enhance group engagement in the future [F1]. Within the subtheme of relevance to clinical practice, engagement was further promoted by “choosing topics relevant to clinical practice” [Q8]; however, this was at times a challenge for JCs whose members came from diverse professions or interest areas [Q5], or where there was a lack of evidence on a certain topic: “..when you get to the end of it [article] and it’s not great and you ask the question are we going to implement this? No” [F2]. Ensuring there was enough time within the JC to discuss how the evidence could be used was also seen as important for applying it to practice: “it would have been helpful if we’d spent a little bit more time on that, about how it can be applied in practice” [F1].

A final subtheme of engagement was accountability and attendance. Clinicians reported that the structure of the TREAT format encouraged attendance: “because it was a six week [JC] block there was the intent of staff to attend. Whereas sometimes you can look at what’s being presented and choose whether you want to attend or not” [F1]. Making time to come to the JC amongst “competing priorities and clinical caseloads” [F2] was, however, still considered a challenge, with one TREAT participant reporting, “while everybody recognises it [JC] as really important…the reason people are unable to attend is because something else more important is happening” [F2].

Discussion

This study is the first cluster RCT to evaluate the effectiveness and feasibility of an evidence-based JC intervention for allied health clinicians working in a healthcare setting. We demonstrated that it is feasible to deliver a structured, evidence-based JC with allied health clinicians. Participants in the TREAT JC were significantly more satisfied with the organisation of the JC than those in the standard JC, and they perceived a number of components of the TREAT format to be helpful in promoting skill development and engagement. Even so, the quantitative data did not demonstrate that the format was more effective than a standard JC in improving individuals’ knowledge and skills in EBP; rather, intermediate-level EBP skills were maintained in both groups over time.

While previous studies have reported that JCs can increase clinicians’ knowledge and skills in EBP [3, 6,7,8], other studies reporting discrepant results [18, 21] may give insight into the lack of change on quantitative outcome measures in the present study. For example, an absence of quantitative change in EBP competency was reported in a previous study that also used the ACE tool [21]. Consistent with our findings, Ilic et al. [21] found a disparity between positive qualitative data supporting an EBP intervention and the lack of any significant quantitative difference. They suggested that this may be because the majority of items in the ACE tool evaluate “cognitive knowledge rather than direct application in a clinical context” ([21] p 7). It is therefore possible that the ACE tool was not sufficiently sensitive to detect changes following JC participation.

Another possible explanation for the non-significant quantitative changes may be the length of the JC intervention. While previous JC research has used a similar six-session format across 6 months [7], the present study’s participants attended an average of approximately four sessions. As such, the intervention may not have been long enough to demonstrate a significant effect on knowledge and skills, despite the qualitative reports of improved confidence. The finding that participants in both the TREAT and standard JCs rated their attitudes to EBP very positively at pre-assessment may also have reduced the likelihood of identifying significant post-intervention changes on the EBPQ, as also reported by Lizarondo et al. [7].

The inclusion of nested focus groups and qualitative survey questions provided valuable evidence on the feasibility of this evidence-based intervention in a clinical setting. Clinicians reported improved skills in appraisal and critical thinking as a result of JC participation, which is consistent with previous research [22]. Similarly to other studies, this study confirmed existing barriers to JC implementation such as heavy clinical workloads, staff changes and limited skills [14, 22]. Because many areas within allied health are research-emergent, at times the only evidence available to answer a clinical question is of low quality; this lack of high-quality evidence to appraise within the JC was a novel barrier reported by the allied health clinicians participating in these journal clubs.

Implications for practice

We identified evidence-based components of JCs that may enhance clinicians’ satisfaction. In particular, the academic facilitator was seen by clinicians as both a source of expert knowledge and useful for keeping discussion flowing in the JC. The collaborative group appraisal approach was perceived as valuable in promoting more active participation and reducing the time demands on, and responsibility of, the presenting clinician. Seeking librarians’ help with searching, shared initial topic selection, and the use of structured critical appraisal tools were also seen as beneficial strategies. These may be important components for the practical implementation of JCs in other contexts. The qualitative findings support the feasibility of implementing JCs to enhance clinician engagement and maintain clinicians’ intermediate knowledge and skills in EBP; however, clinicians may benefit from additional training and strategies to further enhance EBP skills and to apply robust research findings to clinical practice.

Limitations and implications for research

The current practice of rotating allied health staff throughout the hospital reduced the number of participants who were consistently able to attend JCs and complete both pre- and post-assessments. Even so, post hoc power calculations indicated that a sample of 40 per group would still be able to detect a statistically significant difference of 6.1 points on the EBPQ, which would be considered a minimum meaningful difference; it is therefore unlikely that the drop-out led to insufficient statistical power. While the TREAT and standard formats were mostly different, some aspects of the TREAT format may also have been used in the standard group (e.g., a consistent meeting time, discussion of how the evidence applies to practice). This overlap of components may have made it more difficult to detect differences between groups.
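
As an illustration of this kind of post hoc calculation (not a reproduction of the study’s own analysis), the minimum detectable difference for two groups of 40 can be back-calculated from a standardised effect size. The power, alpha and EBPQ standard deviation below are assumptions chosen for demonstration only.

```python
from statsmodels.stats.power import TTestIndPower

# Assumptions for illustration only: 80% power, two-sided alpha = 0.05,
# and a hypothetical between-subject SD of 9 EBPQ points (not reported here).
assumed_sd = 9.0
effect_size = TTestIndPower().solve_power(
    effect_size=None, nobs1=40, alpha=0.05, power=0.80, ratio=1.0,
    alternative="two-sided",
)  # returns the detectable Cohen's d (~0.63 for n = 40 per group)
min_detectable_points = effect_size * assumed_sd
print(f"Minimum detectable difference ≈ {min_detectable_points:.1f} EBPQ points")
```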

Similar pragmatic trials, if conducted in other health settings, could deepen our understanding of the effectiveness and feasibility of JCs and may contribute to a future meta-analysis. Further understanding is also required of the comparative contributions of each of the key components of JCs. It will also be important to ascertain whether current self-reported questionnaires and objective measures of EBP are sufficiently sensitive to detect change in the clinical application of research evidence. Future research could compare the TREAT format with TREAT supplemented by further EBP education, or with other active EBP interventions, potentially over a longer intervention period and with longer follow-up of the application of EBP skills over time. Reliable methods for measuring changes to clinical practice arising from journal club participation are also needed.

Conclusions

We demonstrated that it is feasible to implement an evidence-based, structured JC for allied health clinicians that maintains their positive attitudes and intermediate EBP knowledge and skills. Participants in both groups reported a positive influence on their clinical practice. While EBP knowledge, skills, attitudes and practice did not improve in the structured TREAT journal club compared with a standard journal club, participants were significantly more satisfied with the organisation of the TREAT JC. The qualitative findings support the evidence base for effective JCs in allied health to promote skill development and engagement, and suggest that JCs include an academic facilitator, collaborative group-based critical appraisal, and structured appraisal tools. While a structured JC may maintain EBP skills in allied health clinicians and facilitate engagement, additional training may be required to further enhance EBP skills and the subsequent uptake of evidence into clinical practice.

Abbreviations

ACE tool:

Assessing Competence in EBM tool

EBP:

Evidence based practice

GCH:

Gold Coast Health

JC:

Journal club

TREAT:

Tailoring Research Evidence and Theory

References

1. Hoffmann T, Bennett S, Del Mar C. Evidence-based practice across the health professions. 2nd ed. Churchill: Elsevier; 2013.

2. Straus S, Richardson W, Glasziou P. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Elsevier Churchill Livingstone; 2011.

3. Lizarondo L, Grimmer-Somers K, Kumar S. A systematic review of the individual determinants of research evidence use in allied health. J Multidiscip Healthc. 2011;4:261–72.

4. Caldwell E, Whitehead M, Fleming J, Moes L. Evidence-based practice in everyday clinical practice: strategies for change in a tertiary occupational therapy department. Aust Occup Ther J. 2008;55(2):79–84.

5. Heiwe S, Kajermo KN, Tyni-Lenné R, Guidetti S, Samuelsson M, Andersson IL, Wengström Y. Evidence-based practice: attitudes, knowledge and behaviour among allied health care professionals. Int J Qual Health Care. 2011;23(2):198–209.

6. Deenadayalan Y, Grimmer-Somers K, Prior M, Kumar S. How to run an effective journal club: a systematic review. J Eval Clin Pract. 2008;14(5):898–911.

7. Lizarondo L, Grimmer-Somers K, Kumar S, Crockett A. Does journal club membership improve research evidence uptake in different allied health disciplines: a pre-post study. BMC Res Notes. 2012;5:588.

8. Honey CP, Baker JA. Exploring the impact of journal clubs: a systematic review. Nurse Educ Today. 2011;31(8):825–31.

9. Harris J, Kearley K, Heneghan C, Meats E, Roberts N, Perera R, Kearley-Shiers K. Are journal clubs effective in supporting evidence-based decision making? A systematic review. BEME guide no. 16. Med Teach. 2011;33(1):9–23.

10. Linzer M, Brown JT, Frazier LM, DeLong ER, Siegel WC. Impact of a medical journal club on house-staff reading habits, knowledge, and critical appraisal skills. A randomized control trial. JAMA. 1988;260(17):2537–41.

11. Macrae HM, Regehr G, McKenzie M, Henteleff H, Taylor M, Barkun J, Fitzgerald GW, Hill A, Richard C, Webber EM, et al. Teaching practicing surgeons critical appraisal skills with an internet-based journal club: a randomized, controlled trial. Surgery. 2004;136(3):641–6.

12. Lizarondo LM, Kumar S, Grimmer-Somers K. Supporting allied health practitioners in evidence-based practice: a case report... Including commentary by Goodfellow LM. Int J Ther Rehabil. 2009;16(4):226–36.

13. Milinkovic D, Field N, Agustin CB. Evaluation of a journal club designed to enhance the professional development of radiation therapists. Radiography. 2008;14(2):120–7.

14. McQueen J, Miller C, Nivison C, Husband V. An investigation into the use of a journal club for evidence-based practice... Including commentary by Dobrzanska L and Kanthraj GR. Int J Ther Rehabil. 2006;13(7):311–7.

15. Reed D, Price EG, Windish DM, Wright SM, Gozu A, Hsu EB, Beach MC, Kern D, Bass EB. Challenges in systematic reviews of educational intervention studies. Ann Intern Med. 2005;142(12 Pt 2):1080–9.

16. Critical Appraisal Skills Programme (CASP). http://www.casp-uk.net/checklists. Last retrieved 4 December 2017.

17. Pinnock H, Epiphaniou E, Sheikh A, Griffiths C, Eldridge S, Craig P, Taylor SJ. Developing standards for reporting implementation studies of complex interventions (StaRI): a systematic review and e-Delphi. Implement Sci. 2015;10(1):42.

18. Upton D, Upton P. Development of an evidence-based practice questionnaire for nurses. J Adv Nurs. 2006;53:454–8.

19. Ilic D, Bin Norden R, Glasziou P, Tilson J, Villanueva E. Development and validation of the ACE tool: assessing medical trainees’ competency in evidence based medicine. BMC Med Educ. 2014;14:114.

20. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

21. Ilic D, Nordin RB, Glasziou P, Tilson JK, Villanueva E. A randomised controlled trial of a blended learning education intervention for teaching evidence-based medicine. BMC Med Educ. 2015;15(1):39.

22. Lizarondo L, Grimmer-Somers K, Kumar S. Exploring the perspectives of allied health practitioners toward the use of journal clubs as a medium for promoting evidence-based practice: a qualitative study. BMC Med Educ. 2011;11:66.


Acknowledgements

The authors wish to sincerely thank all clinicians for their time and commitment in participating in this research. Acknowledgements also to the allied health EBP champions and Gold Coast Health librarians for their support and contribution to the project.

Funding

No competitive funding was obtained for the completion of this research.

Availability of data and materials

The datasets supporting the conclusions of this article are available upon request from the contact author.

Author information

Authors and Affiliations

Authors

Contributions

RW, RT and SM were involved in the conceptualisation and design of this research. RW led the ethics submission and participant recruitment and, with RT, conducted the pre- and post-assessment data collection. SM and RW undertook the qualitative data analyses. RT undertook the initial quantitative data analyses, and IH further prepared the data and completed additional mixed effects models. All authors were involved in drafting the manuscript, with RW completing the final preparation and submission. All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Rachel J. Wenke.

Ethics declarations

Ethics approval and consent to participate

Ethical approval for the study was provided by the Office of Gold Coast Hospital Human Research Ethics Committee (HREC/15/QGC/310). Written informed consent was obtained from all participants.

Competing interests

The authors do not have any competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

This table outlines the session structure of the TREAT journal club format. (DOC 57 kb)

Additional file 2:

This file shows a copy of the original questionnaire given to participants to measure their satisfaction with the journal club they participated in and its influence on their clinical practice. (DOC 44 kb)

Additional file 3:

This file includes the interview guide used in the post treatment focus group for the TREAT participants. (DOC 24 kb)

Additional file 4:

This table outlines each of the topics that were discussed in each of the journal club sessions. (DOC 40 kb)

Additional file 5:

This table provides the frequency of individual item responses from the ACE tool both pre and post intervention. (DOC 60 kb)

Additional file 6:

This table describes the reported frequency of changes to clinical practice following journal club reported by clinicians. (DOC 41 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Wenke, R.J., Thomas, R., Hughes, I. et al. The effectiveness and feasibility of TREAT (Tailoring Research Evidence and Theory) journal clubs in allied health: a randomised controlled trial. BMC Med Educ 18, 104 (2018). https://doi.org/10.1186/s12909-018-1198-y
