Objectives The importance of teaching the skills and practice of Evidence-Based Medicine (EBM) to medical students has grown steadily in recent years. Alongside this growth is a need to evaluate the effectiveness of EBM curricula as assessed by competency in the five ‘A’s’: Asking, Acquiring, Appraising, Applying and Assessing (impact). A longitudinal, competency-based, clinically integrated EBM theme, with assessments, has been designed and implemented at the University of Buckingham Medical School (UBMS). The EBM curriculum is progressive: students are taught to ask, acquire and appraise evidence in years one and two, and in year three they are asked to apply EBM in clinical practice and reflect on their experience. The aim of this study was to carry out a cross-sectional study examining the feasibility of administering the 15-item Assessing Competency in EBM (ACE) tool and to compare student performance on the tool across different years of EBM training.
Method While initially testing the feasibility of administering the tool, we used a paper-based assessment delivered during the EBM teaching session. After successfully completing the feasibility phase, we administered the test through our virtual learning environment. Data were collected on student performance in the paper-based assessment for one cohort (third-year students) and from the assessment in our online portal for first- and second-year students. Performance data on the ACE tool were gathered from a cross-sectional sample of 212 medical students representing the first-year, second-year and third-year cohorts. Total ACE scores, item discrimination and internal reliability were analysed.
Results Performance data from 212 students (83 first years, 83 second years and 46 third years) were compared via one-way ANOVA. No significant difference in mean scores was observed across the years (mean scores 10.4, 10.22 and 10.28). Individual item discrimination was good except for one item (item discrimination indices ranging from 0.27 to 0.93). Overall test reliability was 0.60, with internal reliability consistent across most items (item-total correlations were all positive, ranging from 0.14 to 0.60).
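The abstract reports item discrimination indices, item-total correlations and an overall reliability of 0.60 for the 15 dichotomously scored ACE items. The exact computational method the authors used is not stated; the sketch below is an illustrative implementation, assuming the common upper/lower 27% discrimination index, corrected item-total correlations, and KR-20 reliability for 0/1-scored items.

```python
import numpy as np

def item_analysis(responses):
    """Item analysis for dichotomous (0/1) test data.

    responses: (n_students, n_items) array of 0/1 item scores.
    Returns:
      discrimination -- upper 27% minus lower 27% proportion correct per item
      item_total     -- corrected item-total correlation per item
      kr20           -- Kuder-Richardson 20 internal-consistency reliability
    (Assumed methods: the source does not specify which formulas were used.)
    """
    responses = np.asarray(responses, dtype=float)
    n, k = responses.shape
    total = responses.sum(axis=1)

    # Discrimination index: proportion correct in the top 27% of scorers
    # minus the proportion correct in the bottom 27%.
    cut = max(1, int(round(0.27 * n)))
    order = np.argsort(total)
    low, high = responses[order[:cut]], responses[order[-cut:]]
    discrimination = high.mean(axis=0) - low.mean(axis=0)

    # Corrected item-total correlation: each item against the total
    # score with that item removed, to avoid part-whole inflation.
    item_total = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(k)
    ])

    # KR-20: reliability coefficient for dichotomous items.
    p = responses.mean(axis=0)          # proportion correct per item
    pq_sum = (p * (1 - p)).sum()
    kr20 = (k / (k - 1)) * (1 - pq_sum / total.var())

    return discrimination, item_total, kr20
```

For a 15-item dichotomous test such as the ACE, these three statistics together indicate whether each item separates stronger from weaker candidates and whether the items cohere as a scale.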
Discussion Despite the ease of administration and scoring, the ACE tool may have limited power to discriminate between different levels of students’ EBM competency. The lack of correlation between test scores and level of training may be explained by the small sample size for third-year students and by the administration of a paper-based test for that cohort versus an online test for the others. The ACE uses dichotomous question types, so even novice students could guess answers at random and still score highly.
Conclusions The ACE test was very easy to administer and score compared with other validated EBM assessment tools, such as the Fresno test. Students found it very useful as a learning resource, demonstrating the application of the EBM steps of asking, acquiring, appraising and applying evidence to a realistic clinical scenario. It is feasible to administer the ACE tool as a formative assessment in undergraduate medical education, and it is a valuable teaching tool for demonstrating the application of the first four steps of EBM to a clinical scenario. Further research is needed to compare the feasibility of, and student performance on, different assessment tools and to suggest a taxonomy of tools to guide EBM educators.