# Virtual trauma patient simulation design using the McGill Simulation Complexity Score (MSCS): a breakthrough in trauma education

Melina Deban, Sameena Iqbal, Andrew Beckett, Nancy Posel, David M. Fleiszer, Tarek Razek, Kosar Khwaja

## Abstract

**Background:** Virtual patient simulations are interactive, computer-based cases. We designed scenarios based on the McGill Simulation Complexity Score (MSCS), a previously described objective complexity score. We aimed to establish the validity of the MSCS and introduce a novel learning tool in trauma education at our institution.

**Methods:** After designing an easy and a difficult patient scenario, we randomized medical students and residents to each perform 1 of the 2 scenarios. We conducted a 2-way analysis of variance of training level (medical student, resident) and scenario complexity (easy, difficult) to assess their effects on virtual time, the number of steps taken in the scenario, beneficial and harmful actions, and the ratio of beneficial to harmful actions.

**Results:** Virtual patient scenarios were successfully designed using the MSCS. Twenty-four medical students and 12 residents participated in the easy scenario (MSCS = 3), and 27 medical students and 12 residents did the difficult scenario (MSCS = 18). Though beneficial actions were similar between students and residents, students performed more harmful actions, particularly when the scenario was difficult. One virtual patient died in the easy scenario and 3 died in the difficult one (all medical students). Performance varied with level of complexity, and there was a significant interaction between level of training and scenario complexity for the number of steps and the number of harmful actions. Decreasing performance with increasing level of complexity, as defined by the MSCS, suggests this score can accurately quantify difficulty.

**Conclusion:** We established the validity of the MSCS and showed its successful application to virtual patient scenario design.

Virtual patient simulations are interactive, computer-based cases through which trainees can navigate by selecting various management options offered to them. As Cook and colleagues illustrated,1 in a world where trainees are expected to learn more with less time and fewer financial resources, virtual patients provide an interesting alternative teaching method. The Association of American Medical Colleges defines a virtual patient as a “specific type of computer program that simulates real-life clinical scenarios; learners emulate the roles of health care providers to obtain a history, conduct a physical exam, and make diagnostic and therapeutic decisions.”2 In trauma, virtual patients provide an opportunity for learning in a stress-free interactive environment, using a widely available platform.

In a separate study, we described the McGill Simulation Complexity Score (MSCS), a trauma simulation complexity score, and showed its inter-rater reliability.3 The MSCS rates the difficulty of simulation scenarios objectively on a scale from 0 to 20 by evaluating each component of the scenario according to the framework provided by the Advanced Trauma Life Support course, namely airway, breathing, circulation, disability and exposure (each rated from 0 to 4). A cardiovascular arrest planned at any point in the scenario leads to an automatic maximal score of 20. In this study, we designed our simulated scenarios based on the MSCS.
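The scoring rule itself is simple enough to state programmatically. The following minimal sketch is our own illustration, not part of the published instrument; the component breakdowns shown for the 2 study scenarios are hypothetical. It sums the 5 component ratings and applies the cardiac arrest override:

```python
def mscs(airway: int, breathing: int, circulation: int,
         disability: int, exposure: int, arrest: bool = False) -> int:
    """McGill Simulation Complexity Score: each ATLS component is rated
    0-4 and summed (total 0-20); a cardiac arrest planned at any point
    in the scenario overrides the total to the maximum of 20."""
    components = (airway, breathing, circulation, disability, exposure)
    if any(not 0 <= c <= 4 for c in components):
        raise ValueError("each component must be rated from 0 to 4")
    return 20 if arrest else sum(components)

# The two study scenarios (component breakdowns hypothetical):
assert mscs(0, 1, 1, 0, 1) == 3    # easy scenario, MSCS = 3
assert mscs(3, 4, 4, 3, 4) == 18   # difficult scenario, MSCS = 18
```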
Using virtual patients, we aimed to establish the validity of our score, as well as to introduce a novel learning method in trauma education at our institution. We first hypothesized that training level would influence performance outcomes (i.e., residents would perform better than students) and, second, that difficulty, as defined by the MSCS, would also have an impact on these outcomes (i.e., trainees would perform better in the easy scenario than in the difficult scenario).

## Methods

Based on the MSCS, we designed 2 virtual simulation scenarios on a web-based platform (DecisionSim). They were designed using characteristics of different levels of the MSCS. For example, to create a simulated patient with a breathing difficulty of 3, corresponding to B3 in the MSCS grid (Figure 1), we would incorporate a tension pneumothorax into the case. We created an easy patient scenario, with a low predetermined total MSCS (MSCS = 3), and a difficult patient scenario, with a high predetermined total MSCS (MSCS = 18) (Table 1). Both scenarios were reviewed by experts in virtual patient simulation.

Fig. 1. Determination of the McGill Simulation Complexity Score. BSA = body surface area, GCS = Glasgow Coma Scale, HR = heart rate, RBC = red blood cells, RR = respiratory rate, SBP = systolic blood pressure, TRALI = transfusion-related acute lung injury.

Table 1. Virtual patient scenarios

Figure 2 shows the case map of a scenario. Each box represents a question; learners navigate through the case according to the branching pattern that we designed. Figure 3 shows the learner’s view of each question.

Fig. 2. Case map of the easy virtual patient scenario.

Fig. 3. Example of a question asked of a trainee in a virtual patient scenario.

### Participants

We recruited medical students and general surgery residents by email (with the faculty’s approval). We randomized participants to perform 1 of the 2 scenarios. Participation was voluntary, and all data were deidentified.

### Statistical analysis

We assessed the validity of the MSCS based on the expected influence of level of complexity on performance. Performance assessment was objective; specific scores were preassigned to management options, and trainee performance was calculated by the DecisionSim platform. We collected data on the following outcomes: number of steps, virtual time, beneficial actions, harmful actions, ratio of beneficial to harmful actions and virtual patient deaths. The values were accessed on the website through password-protected case administration privileges.

A step was defined as any question the trainee had to answer before reaching the conclusion. The branching pattern of the scenario allowed some trainees to reach the conclusion with fewer steps than others. The virtual time referred to the sum of the predetermined time allotted to each action, were it to happen in reality. We deemed virtual time superior to real time, considering that real time may have been influenced by many factors outside the authors’ control, such as the trainee taking a break without closing the browser tab. We regarded any action leading to benefit to the patient as a beneficial action, and any invasive procedure having a negative impact on the patient’s care as a harmful action. We also calculated the ratio of beneficial to harmful actions. This outcome allowed accurate performance comparison between the easy and the difficult virtual patient scenarios, in particular because the difficult scenario was, by design, longer than the easy one. Virtual patient death was defined as a series of 3 consecutive harmful actions in the scenario, after which the virtual patient died and the case ended.
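To make the case structure and these outcome definitions concrete, here is a minimal sketch of how a branching case and the tallying of our outcomes could be represented. DecisionSim’s internal model is proprietary, so the types and names below, and the assumption that a non-harmful choice resets the consecutive-harm count, are our own illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    text: str             # management option shown to the trainee
    effect: str           # "beneficial", "harmful" or "neutral"
    virtual_seconds: int  # predetermined time this action would take in reality
    next_id: str | None   # next question, or None at the conclusion

@dataclass
class Question:
    prompt: str
    options: dict[str, Option] = field(default_factory=dict)

def run_case(case: dict[str, Question], start: str, choices: list[str]) -> dict:
    """Replay a trainee's choices through the branching case and tally the
    study outcomes: steps, virtual time, beneficial and harmful actions,
    their ratio, and virtual patient death (3 consecutive harmful actions)."""
    steps = beneficial = harmful = virtual_time = consecutive_harm = 0
    node = start
    for key in choices:
        option = case[node].options[key]
        steps += 1
        virtual_time += option.virtual_seconds
        if option.effect == "harmful":
            harmful += 1
            consecutive_harm += 1
            if consecutive_harm == 3:  # the virtual patient dies, case ends
                return {"steps": steps, "virtual_time": virtual_time,
                        "beneficial": beneficial, "harmful": harmful,
                        "died": True}
        else:
            consecutive_harm = 0       # assumption: any other action resets the run
            if option.effect == "beneficial":
                beneficial += 1
        if option.next_id is None:     # conclusion reached
            break
        node = option.next_id
    ratio = beneficial / harmful if harmful else float("inf")  # convention ours
    return {"steps": steps, "virtual_time": virtual_time, "beneficial": beneficial,
            "harmful": harmful, "ratio": ratio, "died": False}
```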
We used 2-way analysis of variance (ANOVA) of training level (medical student, resident) and scenario complexity (easy, difficult) to assess their effects on virtual time, the number of steps taken in the scenario, beneficial and harmful actions, and the ratio of beneficial to harmful actions. The 4 groups (medical students who did the easy scenario, medical students who did the difficult scenario, residents who did the easy scenario and residents who did the difficult scenario) were compared in a 2-way factorial ANOVA, with training level and scenario difficulty as between-subjects factors, since each participant belonged to a single training level and completed a single scenario. We also reported the interaction between level of complexity and level of training. We conducted analyses using SAS software, version 9.3.
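Our analyses were run in SAS; for readers who wish to reproduce the design, an equivalent 2-way between-subjects ANOVA with an interaction term can be set up as follows. This is a sketch with hypothetical data and column names, using Python’s statsmodels rather than the SAS procedure we actually used:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: one row per participant, with one outcome (here, the
# number of harmful actions) and the two between-subjects factors.
df = pd.DataFrame({
    "harmful":    [14, 19, 9, 11, 6, 8, 5, 7],
    "training":   ["student"] * 4 + ["resident"] * 4,
    "complexity": ["easy", "easy", "difficult", "difficult"] * 2,
})

# Two-way ANOVA with an interaction term, mirroring the design:
# training level x scenario complexity.
model = ols("harmful ~ C(training) * C(complexity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```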
### Ethics approval

This study was reviewed by the Institutional Review Board of McGill University and granted ethics approval.

## Results

The total number of participants was 75. Twenty-four medical students and 12 general surgery residents (first year to fifth year) did the easy scenario, and 27 medical students and 12 residents did the difficult scenario.

The main effect of training level on steps taken was statistically significant (*F*1,71 = 8.08, *p* = 0.0058). Medical students took more steps (mean 120.0) than residents (mean 106.9). We observed a significant interaction between training level and scenario complexity (*p* = 0.0064) (Table 2). When the scenario was difficult, students took more steps than residents, but when the scenario was easy, students took a similar number of steps to residents.

Table 2. Two-way analysis of variance of training level and scenario difficulty

The main effect of training level on beneficial steps was not statistically significant. However, the main effect of training level on harmful steps was statistically significant (*F*1,71 = 6.71, *p* < 0.0001). Medical students took more harmful steps (mean 13.2) than residents (mean 8). A significant interaction between training level and scenario difficulty was found (*p* < 0.0001) (Table 2). Although the number of beneficial actions was similar between students and residents, students consistently performed more harmful actions, and significantly more so when the scenario was difficult.

When we explored the ratio of beneficial to harmful steps, the main effect of training level on the ratio was not statistically significant (*F*1,71 = 2.53, *p* = 0.1162), but the main effect of scenario complexity was (*F*1,71 = 24.3, *p* < 0.0001). The ratio was lower among participants who completed the complex scenario than among those who completed the easy scenario. No interaction was detected (*p* = 0.9138) (Table 2). In other words, the number of beneficial actions per harmful action did not vary significantly by training level, but it did decrease significantly when the scenario was difficult.

Interestingly, there was a statistically significant association between level of training and virtual time (*F*1,71 = 8.40, *p* = 0.0050). Medical students took more time than residents to navigate through the scenario (median 6410 v. 4780 s). Scenario complexity was also significantly associated with virtual time (*F*1,71 = 5.8, *p* = 0.0187), with the difficult scenario taking more time (median 6930 v. 4602.5 s). There was no interaction between scenario difficulty and training level.

Death as a scenario outcome occurred only in the medical student group, with 1 death in the easy scenario and 3 deaths in the difficult one. There were no virtual patient deaths among residents.

## Discussion

We chose virtual patient simulation to determine the validity of the MSCS by illustrating the influence of level of complexity on performance. Training level did not significantly affect beneficial actions, the ratio of beneficial to harmful actions or the number of steps. Notably, medical students performed similarly to general surgery residents in the easy scenario, which may reflect the fact that the medical students were concurrently studying for clerkship examinations. They were most likely doing so at a level appropriate for the performance of the easy scenario, explaining why outcomes were similar between residents and students who completed it.

We noted a significant difference in harmful actions between students and residents. In the difficult scenario, students performed almost twice as many harmful actions as residents. Therefore, although the difference in favour of the residents was not significant in terms of beneficial actions, there was a significant reduction in the number of harmful actions with increasing level of training.

When analyzing the effect of scenario complexity, we observed significant differences in beneficial actions, harmful actions and the ratio of beneficial to harmful actions. This ratio, the outcome best suited for comparing the scenarios, was remarkably higher for the easier scenario. The residents nearly doubled their number of harmful actions when faced with the more complex virtual patient, and the students tripled theirs. Our results showed that, as expected, performance declined with increasing complexity as measured by the MSCS. These findings support the use of the MSCS as a scoring tool: we have therefore established its validity using virtual patient scenarios.

Skills learned in the simulation setting are better acquired and retained than those taught didactically. This was illustrated in a study of 216 Swedish medical students that explored hematology and cardiology topics using virtual patients, traditional methods or both; virtual patients were superior to traditional teaching in both learning and evaluation.4 The same group also showed that exercises involving virtual patients resulted in better knowledge retention.5 Similarly, Cherry and colleagues6 showed improved medical student performance with human patient simulators in trauma.
They reported that the students subjectively preferred the simulator to traditional training.6 Virtual patient learning has been successfully implemented in American and Canadian medical schools.7

The use of virtual patients as an educational tool has some drawbacks, such as the lack of human contact and of the realism of caring for in-hospital patients. However, according to Cook and colleagues,1 the place of virtual patients lies in reinforcing clinical reasoning, defined as “the application of knowledge to collect and integrate information from various sources to arrive at a diagnosis and management plan.” The development of clinical reasoning relies on exposure to the numerous and varied case examples that virtual patients can provide. On the continuum of competency, virtual patients come after the consolidation of core knowledge through small groups and lectures, and right before the standardized patient, which is aimed at teaching how to take a patient history and conduct an examination.8

To facilitate the building of clinical reasoning, care should be given to the design format of the virtual patient. Cook and colleagues1 provide examples of features to be considered when creating a scenario, including interactivity, case progression, knowledge of diagnosis, and feedback and instruction, among others. However, even when these are taken into account, a problem remains: trainees at varying levels are at different stages of learning in terms of their knowledge base and the number of cases retained.9 This is where we believe that case design based on the MSCS can be very useful: it can match the complexity of a trauma scenario to a given level of training so that the acquisition of clinical reasoning is optimized.

The MSCS can also address gaps in knowledge.1 The MSCS framework can be used to identify precisely which component is troublesome to the trainee. For example, if trainees have difficulty with an A0, B1, C1, D0, E3 case, we can tailor the trauma curriculum to include more discussion of major extremity injuries. By structuring scenario design, the MSCS can be used to identify which learning points should be emphasized, as the sketch below illustrates.
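A minimal sketch of this diagnostic use (our own illustration; the error counts and topic labels are hypothetical, not part of the MSCS): per-component error counts pooled across trainees are used to flag the weakest ATLS domain for curricular emphasis.

```python
# Hypothetical per-component error counts pooled across trainees for one
# scenario profile (e.g., A0, B1, C1, D0, E3); topic labels are ours.
errors = {"A": 1, "B": 4, "C": 3, "D": 0, "E": 9}
topics = {
    "A": "airway management",
    "B": "breathing and ventilation",
    "C": "circulation and hemorrhage control",
    "D": "neurologic disability",
    "E": "exposure, including major extremity injuries",
}

weakest = max(errors, key=errors.get)  # component with the most errors
print(f"Emphasize {topics[weakest]} in the curriculum (component {weakest}).")
```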
Moreover, the virtual patient scenarios constructed with the MSCS are well reviewed by learners in terms of location and time flexibility, interactivity, stress-free context, goal-directedness and ease of use.

Our scoring system for virtual patient scenarios, using beneficial and harmful actions, is of interest notably for the assessment of competency in medicine. The ratio of beneficial to harmful actions provides a relatively simple measurement of the overall quality of a trainee’s decision-making. This could be used in training programs and other learning environments for evaluation purposes, but also to highlight areas of possible improvement.

We believe that our novel score can be applied to the design of virtual patient scenarios, a promising adjunct to current learning strategies employed in medical education.8–10 Together with high-fidelity simulation, interactive computer-based learning can become a new method of optimizing knowledge retention in trauma education. Future directions include determining the MSCS difficulty of trauma simulations appropriate for junior or senior residents to demonstrate their competency. This would aid in tailoring their trauma education and performance evaluation, and could even be incorporated into competency-based learning. Furthermore, the MSCS framework could be used in the creation of high-fidelity mannequin simulation scenarios, as well as live mass casualty simulations. Further work on the validation of the MSCS in these settings needs to be conducted.

### Limitations

The complex nature of the difficult scenario made it longer, meaning there was more potential for beneficial and harmful actions to be performed. We addressed this issue by using the ratio of beneficial to harmful actions as an outcome. Other than level of training, factors that may influence the number of harmful actions include comfort, anxiety, higher executive functional abilities and fatigue; our study did not account for these factors. Lastly, it is unclear at this stage how many virtual trauma simulations must be performed for the benefit to translate into clinical improvement at the bedside.

## Conclusion

The MSCS framework can be used to design and assess trauma simulation on various platforms and is a breakthrough in trauma education. In the virtual patient scenarios designed with the MSCS, performance varied with level of complexity, as expected, and there was significant interaction with level of training for 2 performance outcomes, namely the number of steps and the number of harmful actions. Moreover, decreasing performance with increasing level of complexity, as defined by the MSCS, suggests our score can accurately quantify difficulty. As such, we were able to establish the validity of our score and demonstrate that the MSCS truly reflects varying levels of complexity.

## Footnotes

* Presented at the American College of Surgeons Clinical Congress, San Francisco, Calif., Oct. 26–30, 2014.
* **Competing interests:** None declared.
* **Contributors:** All of the authors contributed to the conception and design of the work. M. Deban, S. Iqbal and N. Posel contributed to the acquisition, analysis and interpretation of data. M. Deban drafted the manuscript. All of the authors revised it critically for important intellectual content, gave final approval of the version to be published and agreed to be accountable for all aspects of the work.
* Accepted May 25, 2021.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY-NC-ND 4.0) licence, which permits use, distribution and reproduction in any medium, provided that the original publication is properly cited, the use is noncommercial (i.e., research or educational use), and no modifications or adaptations are made. See: [https://creativecommons.org/licenses/by-nc-nd/4.0/](https://creativecommons.org/licenses/by-nc-nd/4.0/)

## References

1. Cook DA, Triola MM. Virtual patients: a critical literature review and proposed next steps. Med Educ 2009;43:303–11.
2. Effective use of educational technology in medical education: summary report 2006 AAMC Colloquium on Educational Technology. Washington (DC): Association of American Medical Colleges (AAMC); 2007:1–7.
3. Khwaja K, Deban M, Iqbal S, et al. The McGill Simulation Complexity Score (MSCS): a novel complexity scoring system for simulations in trauma. Can J Surg 2023;66:E206–11.
4. Botezatu M, Hult H, Tessma MK, et al. Virtual patient simulation: knowledge gain or knowledge loss? Med Teach 2010;32:562–8.
5. Botezatu M, Hult H, Tessma MK, et al. Virtual patient simulation for learning and assessment: superior results in comparison with regular course exams. Med Teach 2010;32:845–50.
6. Cherry RA, Ali J. Current concepts in simulation-based trauma education. J Trauma 2008;65:1186–93.
7. Huang G, Reynolds R, Candler C. Virtual patient simulation at U.S. and Canadian medical schools. Acad Med 2007;82:446–51.
8. Charlson ME, Pompei P, Ales KL, et al. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 1987;40:373–83.
9. Youngblood P, Harter PM, Srivastava S, et al. Design, development, and evaluation of an online virtual emergency department for training trauma teams. Simul Healthc 2008;3:146–53.
10. Lee SK, Pardo M, Gaba D, et al. Trauma assessment training with a patient simulator: a prospective, randomized study. J Trauma 2003;55:651–7.