Associate Professor of Emergency Medicine, University of Virginia, Charlottesville, Virginia, United States
Objectives: To assess the effect of a novel electronic format on the frequency of evaluations completed by faculty and residents for medical students in the emergency department (ED).
Background: Faculty and resident assessments of medical students working in the ED play an integral role in helping students improve and in providing input for students’ grades and standardized letters of evaluation for their residency applications. Despite the importance of these evaluations, students and clerkship directors often report that they do not receive a sufficient number of evaluations to provide meaningful assessments. The aim of this study was to determine whether an electronic evaluation system would improve the frequency of submitted evaluations and the quantity of information submitted.
Methods: This was a prospective observational study at a single academic ED from 2019 to 2022 that offers an advanced clerkship elective in emergency medicine and an advanced elective in pediatric emergency medicine for senior medical students. Evaluations were performed using a modified version of the National Clinical Assessment Tool for Medical Students in the Emergency Department. Prior to the intervention, residents and faculty were asked to complete paper evaluations on students after every shift in the ED and submit them to a locked box in the ED. At the beginning of academic year 2020, a new electronic version of the evaluation was provided as a Google Form, accessible via a hyperlink or QR code that was given to all students and posted in the ED. Descriptive and comparative statistics were calculated. A sensitivity analysis was performed to assess the impact of COVID-19 on the results.
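The abstract does not name the specific comparative test used; as a minimal illustration only, a per-student comparison of evaluation counts between the paper and electronic formats might be run as in the sketch below (Python with SciPy; Welch's t-test is an assumed choice, and the counts are hypothetical placeholders, not study data).

```python
# Minimal sketch of the per-student comparison described in the Methods.
# Assumptions: Welch's t-test as the comparative statistic; the counts below
# are hypothetical placeholders, not data from the study.
from statistics import mean, stdev
from scipy import stats

paper = [2, 3, 1, 4, 2, 3, 5, 2]       # evaluations per student, paper format (hypothetical)
electronic = [6, 4, 7, 5, 8, 3, 6, 5]  # evaluations per student, electronic format (hypothetical)

# Compare mean evaluations per student between formats, allowing unequal variances
t_stat, p_value = stats.ttest_ind(electronic, paper, equal_var=False)
print(f"paper: mean {mean(paper):.1f} (SD {stdev(paper):.1f}); "
      f"electronic: mean {mean(electronic):.1f} (SD {stdev(electronic):.1f}); p = {p_value:.3f}")
```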
Results: Over the three-year period, 172 students rotated in the ED, and 718 evaluations were submitted. Students worked approximately 2,924 shifts and received evaluations for 22% of these shifts. With the paper format, students received a mean of 2.8 (SD = 2.1) evaluations for their month-long rotation, compared with 5.7 (SD = 3.9) evaluations with the electronic format (p < 0.001). Resident evaluations increased more than attending evaluations following the implementation of the electronic format, with a mean of 2.1 resident evaluations per student with the paper format versus 4.1 with the electronic format (p < 0.05). Most electronic evaluations were accessed via the hyperlink (70%), followed by QR code (27%) and direct email (3%). The number of discrete free-text comments per evaluation increased from a median of 1 (IQR: 0-2) with the paper format to a median of 4 (IQR: 3-5) with the electronic format. A sensitivity analysis excluding data from the 12 months at the height of the COVID-19 pandemic did not reveal any significant changes in the reported associations between the evaluation format and the frequency of submission.
Conclusions: The electronic format was associated with more frequent submission of ED shift evaluations of medical students and with more content per evaluation. As an observational study, there may be unmeasured confounders that affected the results. In addition, while the number of evaluations increased, the quality of the evaluations was not assessed.