High Fidelity Simulation Use In Nursing Education and Curriculum Design

Program Evaluation and Use of Simulators In Nursing Education

Use of Simulator In Program Evaluation

    High-fidelity human patient simulation (HPS) in nursing education has been described as "simulation that incorporates a computerized full body manikin that can be programmed to provide a realistic physiological response to student actions" (Cant & Cooper, 2010, p. 4). Program evaluation of simulation is a methodical process of data collection and review intended to inform decisions and improve program efficacy.

Curriculum Design and Simulators Use In Nursing Education

    Integration of simulation across the curriculum represents one of several "best practices" in simulation pedagogy (McGaghie, Issenberg, Petrusa, & Scalese, 2010). The absence of a consistent framework to measure and evaluate simulation-associated outcomes is a major challenge to incorporating simulation into nursing curricula (Onello & Regan, 2013).

    Historically, evaluation of simulation in education has emphasized the measurement of outcomes (i.e., knowledge gain and skill acquisition) and comparisons of simulation with standard teaching methods (Kable, Arthur, Levett-Jones, & Reid-Searl, 2013). Specific reports of curricular or program-level evaluation of simulation are extremely limited (Arthur, Levett-Jones, & Kable, 2013; Kable et al., 2013; Schiavenato, 2009). The design and development of standardized methods of simulation outcome measurement has been called for at national and international levels (Cant & Cooper, 2010).

Gold Standard In Evaluation of Simulator Learning

    Although a gold standard for evaluation of simulation has not appeared in the literature (Kardong-Edgren, Adamson, & Fitzgerald, 2010), researchers are studying systematic assessment methods and developing standardized tools for evaluating simulation (Elfrink Cordi, Leighton, Ryan-Wenger, Doyle, & Ravert, 2012). The nursing education simulation framework (NESF) was developed through a partnership between the National League for Nursing and the Laerdal Corporation. Jeffries (2005) published the framework for designing, implementing, and evaluating simulations in nursing education.

    Constructs of the framework include teacher and student factors, educational practices, student outcomes, and design characteristics. A recent review of the literature found wide variance in the volume and strength of evidence supporting the constructs and subcomponents (Groom, Henderson, & Sittner, 2013). Sportsman, Schumacker, and Hamilton (2011) used the perception of clinical competence model as a framework to study the impact of simulation in an undergraduate nursing curriculum, drawing on students' sense of clinical competence and other metrics.

    A five-variable model with input variables (student and faculty characteristics) and process variables (learning strategies and clinical learning environment) explained BSN students' perceptions of their clinical competence. Sportsman et al. used this model and three instruments to collect, explore, and interpret data from BSN and AD students participating in a series of courses using simulation experiences. Acceptable reliability coefficients were reported for the three instruments (Sportsman, Schumacker, & Hamilton, 2011).

    Simulation allows for evaluation of student behaviors in the affective, cognitive, and psychomotor learning domains, and assessment should address learning in all three (Sando et al., 2013). Reviewing the extant literature, Kardong-Edgren, Adamson, and Fitzgerald (2010) identified 22 simulation evaluation tools and acknowledged four tools that most closely addressed the three learning domains simultaneously. Davis and Kimble (2011) identified six rubrics for use in measurement of simulation-related objectives and appraised how well these quantified students' achievement of the curricular outcomes expected of baccalaureate nursing programs as articulated in The Essentials of Baccalaureate Education for Professional Nursing Practice (American Association of Colleges of Nursing [AACN]).

    Although rubrics "structured around meeting the essentials have the ability to provide evidence of positive student learning outcomes via HPS outcomes evaluation" (Davis & Kimble, 2011, p. 609), the authors acknowledged that none of the evaluated rubrics both addressed the three domains of learning and measured all nine AACN essentials. A majority were in pilot form and required additional psychometric development. The Program for Nursing Curriculum Integration (PNCI), developed by Medical Education Technologies Inc. (METI, Sarasota, FL), is a comprehensive educational package incorporating the use of simulation, designed to help educators integrate simulation broadly throughout nursing curricula.

    The program requires the iStan simulator and uses an evaluation instrument, the Simulation Effectiveness Tool (SET), which measures simulation effectiveness via student self-report of confidence and learning. Although Schiavenato (2009) states that "an important step in the evaluation of simulation in nursing education is divorcing it from any particular product" (p. 390), the SET has been psychometrically tested in a multisite national study and found to be a reliable instrument (Elfrink Cordi et al., 2012).

    Arthur, Levett-Jones, and Kable (2013) reached international consensus among simulation experts and developed 15 evidence-based quality indicator statements to guide evaluation of simulation implementation within nursing curricula (Arthur et al., 2013, p. 1361). To evaluate simulation quality in undergraduate nursing curricula, Kable, Arthur, Levett-Jones, and Reid-Searl (2013) constructed a set of tools from the quality indicators. One tool, the Student Evaluation Instrument, has been described as applicable across simulation practices, program levels, and curricula, and holds promise for use in interdisciplinary simulation evaluation settings (Kable et al., 2013).

    Testing of the other instruments has been undertaken to expedite refinement of the quality indicators for evaluation of simulation experiences (Arthur et al., 2013). In 2011, the International Nursing Association for Clinical Simulation and Learning (INACSL) published seven standards of best practice in simulation. One of these performance standards specifically addresses outcome evaluation and assessment of simulation-based experiences (Sando et al., 2013).

    The evaluation standard includes detailed guidelines for formative, summative, and high-stakes evaluation of simulation experiences. The standard identifies priority areas for development of an evaluation framework across the gamut of use, addressing the inconsistent outcome definitions, nonreporting of instrument psychometric properties, and lack of methodological rigor noted in the simulation literature (Onello & Regan, 2013).

Discussion On Simulation Use

    As this discussion suggests, simulation evaluation instruments are evolving, but none yet assesses all elements required for quality implementation of simulation across nursing curricula (Kable et al., 2013). A gold standard for evaluation of simulation remains elusive, and simulation-related outcome measurement persists as one of today's greatest pedagogical challenges (McGaghie et al., 2010).

    As simulation gains wider use in nursing education, evaluation efforts must focus on the instruments that currently show the most promise for further refinement and development (Kardong-Edgren et al., 2010), so that a standardized or universal method of outcome measurement for quality curricular integration of simulation can emerge.

    The Simulation Evaluation Instrument, developed by Todd, Manz, Hawkins, Parsons, and Hercinger (2008) and discussed in the work of Davis and Kimble (2011) and Kardong-Edgren et al. (2010), may be positioned for further development: the tool currently evaluates simulated learning across the three learning domains, quantifies achievement of curricular outcomes as articulated in the AACN Essentials, and possesses robust psychometric properties.

    Recognizing that evaluation methods must be guided by a broader reconceptualization of simulation encompassing the varied methods used throughout nursing curricula, the NESF (Jeffries, 2005) also holds promise for further development. The framework and its essential constructs and subconstructs have been suggested to provide a foundation and serve as a fundamental guide for evaluation of the evolving methodology of simulation-based education (Groom et al., 2013).
