Objectives The importance of teaching the skills and practice of Evidence-Based Medicine (EBM) to medical professionals has grown steadily in recent years. Alongside this growth is a need to evaluate the effectiveness of EBM curricula on learners’ knowledge, skills, attitudes, competency and behaviour. A previous systematic review, published in 2006, identified published evaluation instruments and tools that focused predominantly on knowledge, skills and attitudes; few had been formally validated. A number of tools have been published since the 2006 review, and at present there is no taxonomy of existing tools to aid EBM educators. The aim of this systematic review was to provide an up-to-date taxonomy and appraisal of assessment instruments that purport to evaluate learners’ EBM knowledge, skills, attitudes, competency and behaviour.
Method We searched MEDLINE, EMBASE, the Cochrane Library, the Educational Resources Information Centre (ERIC) and Best Evidence Medical Education (BEME) databases, together with the references of retrieved articles published between January 2005 and March 2019, for assessment tools used to evaluate EBM teaching at any stage of medical education. Two reviewers independently performed data extraction and quality assessment. Quantitative and qualitative data were extracted on the development and description of each tool, the number of participants, the training level of participants, the EBM domain(s) evaluated, the level(s) of educational evaluation addressed, psychometric properties and feasibility. The quality of tools was assessed by the number of EBM domains assessed, the levels of educational evaluation addressed, the robustness of psychometric testing and the reporting of feasibility.
Results Of 1608 potentially relevant articles, 155 were selected for full-text review following title and abstract screening. Of these, 11 articles describing five unique instruments met the pre-defined criteria for inclusion. Together with those previously identified (n = 2), this gives a total of seven instruments for the evaluation of EBM teaching in medical education. The level 1 tools, educational prescriptions (EP) and Assessing Competency in EBM (ACE), addressed at least three domains of EBM and two levels of educational evaluation, and reported good discriminatory ability and feasibility (n = 2). The Fresno, Berlin, EBM test and Objective Structured Clinical Examination (OSCE) were categorised as level 2 (n = 4). The Biostatistics and Clinical Epidemiology Skills Assessment (BACES) addressed just one domain and two levels, reported no psychometric properties or feasibility, and was categorised as level 3 (n = 1). Few instruments evaluated the application of EBM skills in either a simulated case scenario or real clinical cases.
Conclusions We report interim findings from this ongoing systematic review. Our review captured an additional five instruments, bringing the total number of available instruments to seven. The majority (86%) of these have reasonable validity. Our review identified educational prescriptions (EP) and ACE as level 1 tools; the Fresno, Berlin, EBM test and OSCE as level 2 tools; and BACES as a level 3 tool. Further development and validation of assessment tools that evaluate all the steps of EBM are needed. The findings of this systematic review offer medical educators a taxonomy of assessment tools to aid the evaluation of their teaching and learning.