Outcomes and Scope of Summative Evaluation In Nursing Education

Afza.Malik GDA

Outcomes of Summative Assessment

    The purpose of outcome evaluation (summative evaluation) is to determine the effects of teaching efforts. Outcome (summative) evaluation measures the changes that result from teaching and learning; it summarizes what happened as a result of the education intervention. Guiding questions in outcome evaluation include the following: 

  • Was teaching appropriate? 
  • Did the individual(s) learn?
  • Were behavioral objectives met? 
  • Did the patient who learned a skill before discharge use that skill correctly once home? 
  • Did the student nurse who acquired a new skill in a laboratory setting or the staff nurse who learned a new skill in a continuing education session demonstrate the ability to independently perform that skill accurately in practice?

    Unlike process evaluation, which occurs concurrently with the teaching-learning experience, outcome evaluation occurs after teaching has been completed or after an educational program has been carried out. 

Scope and Outcomes of Evaluation In Nursing Education

    Abruzzese (1992) clearly explains the difference in scope between outcome evaluation and content evaluation. She notes that outcome evaluation measures more long-term change that “persists after the learning experience” (p. 243). Changes can include institution of a new process, habitual use of a new technique or behavior, or integration of a new value or attitude. The changes the nurse educator measures are usually dictated by the objectives established from the initial needs assessment. 

    Thus, the scope of outcome evaluation focuses on a longer time period than does content evaluation. Whereas evaluating the accuracy of a patient's return demonstration of a skill prior to discharge may be appropriate for content evaluation, outcome evaluation should include measuring a patient's competency with a skill in the home setting after discharge. 

    Similarly, nurses' responses on a workshop posttest may be sufficient for content evaluation, but if the workshop objective states that nurses will be able to incorporate their knowledge into practice on the unit, outcome evaluation should include measuring nurses' knowledge or behavior at some time after they have returned to the unit. Abruzzese (1992) suggests that outcome data be collected 6 months after the original baseline data to determine whether a change has really taken place. 

    Resources required for outcome evaluation are costly and complex compared to those needed for process or content evaluation. Compared to the resources required for the first two types of evaluation in the RSA model, outcome evaluation requires knowledge of how to establish baseline data, greater expertise to develop measurement and data collection strategies, more time to conduct the evaluation, and the ability to collect reliable and valid data for comparative purposes after the learning experience has occurred. 

    Postage to mail surveys and time and personnel to carry out observation of nurses on the clinical unit or to complete patient/family telephone interviews are specific examples of resources that may be necessary to conduct an outcome evaluation. From an EBP perspective, outcome evaluation might arguably be considered “where the rubber meets the road.” 

    Once a need for change has been identified, the search for evidence on which to base subsequent changes commonly begins with a structured clinical question that will guide an efficient search of the literature. This question is also known as a PICO question, where the letters P, I, C, and O stand for population (patient, family member, staff, or student), intervention, comparison, and outcome, respectively. 

    For example, nurses caring for an outpatient population of adults with heart failure might discover that many patients are not following their prescribed treatment regimen. Upon questioning these patients, the nurses learn that most patients do not recognize symptoms resulting from failure to take their medications on a consistent basis. 

    To search the literature efficiently for ways in which they might better educate their patients, the nurses would pose the following PICO question: Does nurse-directed patient education on symptoms and related treatment for heart failure provided to adult outpatients with heart failure result in improved compliance with treatment regimens? 

    In this example, the P is the population of adult outpatients with heart failure, the I is the nurse-directed patient education intervention on symptoms and related treatment for heart failure, the C is the comparison of the education currently being provided (or lack of education, if that is the case), and the O is the outcome that, it is hoped, will result in improved compliance with treatment regimens. 
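
    To make the structure of a PICO question concrete, the sketch below shows one way the four components could be captured and assembled into a searchable question. This is a minimal illustration only; the class name, field names, and question wording are assumptions, not part of any published tool or of the studies cited here.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Illustrative container for the four PICO components (names are assumptions)."""
    population: str    # P: patient, family member, staff, or student group
    intervention: str  # I: the educational intervention being considered
    comparison: str    # C: current practice (or the absence of education)
    outcome: str       # O: the hoped-for result

    def as_question(self) -> str:
        # Assemble the components into a single clinical question for a literature search.
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, result in {self.outcome}?")

# Example mirroring the heart failure scenario described above (wording is illustrative).
hf_question = PICOQuestion(
    population="adult outpatients with heart failure",
    intervention="nurse-directed education on symptoms and related treatment",
    comparison="the education currently provided (or no education)",
    outcome="improved compliance with treatment regimens",
)
print(hf_question.as_question())
```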

    The Skin Protection for Kids program (Walker, 2012) described earlier in this chapter included outcome evaluation as well as content evaluation. Whereas content evaluation of teachers' knowledge was conducted 24 to 48 hours after the educational activity, outcome evaluation took place several months later to measure whether sun-safety practices were implemented by children and parents who were enrolled in this educational program. 

    Prior to making a change in practice, especially if that change will require additional resources or might increase patient risk if unsuccessful, a review of several well-conducted studies providing external evidence directly relevant to the PICO question should be completed. 

    Implementation of the Skin Protection for Kids program (Walker, 2012) is an excellent example of a practice change based on review and appraisal of extensive external evidence, which included a critique of 39 studies plus peer-reviewed guidelines and systematic reviews that focused on sun-safety measures for children. 

    Ferrara, Ramponi, and Cline (2016) conducted an outcome evaluation 2 months after an educational intervention intended to increase physicians' and nurses' knowledge and compliance as well as to change their attitudes about family presence during resuscitation in the emergency department. Follow-up observations demonstrated that families were present during resuscitation 87.5% of the time when staff had received the educational intervention versus only 23% of the time when staff had not received the education. 

    Another example of an outcome evaluation was a study conducted by Sumner et al. (2012) to determine whether nurses completing a basic arrhythmia course retained knowledge 4 weeks after course completion and accurately identified cardiac rhythms 3 months later. An initial content evaluation demonstrated that the 62 nurses who completed the course improved their short-term knowledge from pretest to posttest to a statistically significant degree (p < 0.01). 

    Nurses' scores on a simulated arrhythmia experiment conducted 3 months later demonstrated no significant change from posttest scores obtained immediately after course completion. Ideally, an outcome evaluation to answer the question “Were nurses who completed a basic arrhythmia course able to use their skills in the practice setting?” should be conducted by directly observing those nurses during patient care. 
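
    For readers who want to see how such a pretest-to-posttest comparison might be computed, the sketch below applies a paired t-test to hypothetical scores. The numbers are invented purely for illustration and are not data from Sumner et al. (2012); the choice of a paired t-test is an assumption about one common analytic approach for matched before-and-after scores, not a claim about the statistical method used in that study.

```python
# A minimal sketch of a pretest/posttest comparison using a paired t-test.
# All scores below are hypothetical and are NOT data from Sumner et al. (2012).
from scipy import stats

pretest  = [62, 70, 55, 68, 74, 60, 58, 65, 72, 61]   # hypothetical pretest scores
posttest = [78, 85, 70, 80, 88, 75, 72, 79, 86, 74]   # hypothetical posttest scores

t_statistic, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")

# A p-value below the chosen alpha (e.g., 0.01) would indicate a statistically
# significant short-term knowledge gain, analogous to the content evaluation
# finding described above.
```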

    However, given the logistical challenges of unobtrusively observing nurses when patients are experiencing the arrhythmia included in the course, the use of simulation may be considered a feasible alternative so long as the educator remembers that simulation is merely a proxy for reality. 

Impact Evaluation 

    The purpose of impact evaluation is to determine the relative effects of education on the institution or the community. Put another way, the purpose of impact evaluation is to obtain information that will help decide whether continuing an educational activity is worth its cost (Adams, 2010). 

    Examples of questions appropriate for impact evaluation include “What is the effect of an orientation program on subsequent nursing staff turnover?” and “What is the effect of a cardiac discharge teaching program on long-term frequency of rehospitalization among patients who have completed the program?” The scope of impact evaluation is broader, more complex, and usually more long term than that of process, content, or outcome evaluation. 

    For example, whereas outcome evaluation focuses on whether specific teaching results in achievement of specific outcomes, impact evaluation goes beyond that point to measure the effect or worth of those outcomes. In other words, outcome evaluation focuses on a learning objective, whereas impact evaluation focuses on a goal for learning. Consider, for instance, a class on the use of body mechanics. 

    The objective is that staff members will demonstrate proper use of body mechanics in providing patient care. The goal is to decrease back injuries among the hospital's direct-care providers. As another example, consider a teaching session on healthy food choices for patients who have had bariatric surgery. The objective is that patients will choose healthy foods regardless of whether they are in a restaurant or in the grocery store. The goal is for these patients to increase and sustain their weight loss. 

    This distinction between outcome and impact evaluation may seem subtle, but it is important to the appropriate design and conduct of an impact evaluation. Good impact evaluation is like good science: rarely inexpensive and never quick. 

    The resources needed to design and conduct an impact evaluation generally include reliable and valid instruments, trained data collectors, personnel with research and statistical expertise, equipment and materials necessary for data collection and analysis, and access to populations who may be culturally or geographically diverse. 

    Ching, Forte, Aitchison, and Earle (2015) describe an impact evaluation of interprofessional education for physicians and nurses who worked in 26 primary care practices managing 4,167 patients with diabetes. An evaluation conducted 15 months after the education intervention revealed that a significantly higher proportion of patients (p = 0.0001) had achieved HbA1c targets. 

    Healthcare professionals' confidence and collaborative behavior were sustained for at least 3 years after completing the education. These characteristics exemplify the scope and time frame commonly found with this type of evaluation. Also, an example illustrating how an impact evaluation can be global in nature is provided by Padian et al. (2011) in their discussion of challenges facing those who conduct large-scale evaluations of combination HIV prevention programs. 

    Because impact evaluation requires many resources, including time, money, and research expertise, this type of evaluation is usually beyond the scope of the individual nurse educator. Conducting an impact evaluation may seem to be a monumental task, but this reality should not dissuade the determined educator from the effort. Rather, one should plan well in advance, proceed carefully, and obtain the support and assistance of stakeholders and colleagues. 

    Keeping in mind the purpose for conducting an impact evaluation should be helpful in maintaining the level of commitment needed throughout the process. The current managed care environment requires justification for every health dollar spent. Patient and staff education may intuitively seem beneficial in improving the quality of care, but evidence of education's positive impact must be demonstrated if it is to be recognized, valued, and funded. 

    A literature search conducted to determine the state of the evidence on impact evaluation in patient education found that the term impact is used generically to describe both evaluations of patient outcomes resulting from education and evaluations of long-term effects from education. What is important to remember when reviewing this literature is not which term the authors use, but what their purpose for evaluation is. 

    As noted earlier, the purpose of an outcome evaluation is to determine whether an educational intervention results in the intended behavior change, whereas the purpose of an impact evaluation is to determine whether long-term education goals are met. As the importance of EBP and practice-based evidence continues to grow, impact evaluations are becoming recognized as essential for examining the long-term effectiveness of different educational interventions used to disseminate practice guidelines to healthcare providers (Ammerman, Smith, & Calancie, 2014; Boivin et al., 2010).

Total Program Evaluation

Within the framework of the RSA model (Abruzzese, 1992), the purpose of total program evaluation is to determine the extent to which all activities for an entire department or program over a specified time meet or exceed the goals originally established. In turn, goals for the department or program are based on goals for the larger organization (DeSilets, 2010). 

    Ammerman and colleagues (2014) extend program evaluation even further, stating that program evaluation strategies to address broad public health issues began introducing practice-based evidence even before the term “practice based evidence” was identified. 

    Guiding questions appropriate for a total program evaluation from this perspective might be “To what extent did programs undertaken by members of the nursing staff development department during the year accomplish annual goals established by the department?” and “How well did patient education activities implemented throughout the year meet annual goals established for the institution's patient education program?”

    The scope of program evaluation is broad, generally focusing on overall goals rather than on specific learning objectives. Given its scope, total program evaluation is also complex, usually focusing on the learner and the teacher and the educational activity, rather than on just one of these three components. 

    Abruzzese (1992) describes the scope of program evaluation as encompassing all aspects of educational activity (e.g., process, content, outcome, impact) with input from all the participants (e.g., learners, teachers, institutional representatives, and community stakeholders). 

    It is not surprising, then, that quite a few other models and related theories have been developed to conceptualize and organize total program evaluation. Kirkpatrick's four-level model is one; another, the logic model, consists of four components: inputs, activities, outputs, and outcomes (Rouse, 2011). 

    Stufflebeam and Zhang (2017) describe how the CIPP model can be used to evaluate programs for improvement and accountability. Zhang and Cheng (2012) developed the planning, development, process, and product (PDPP) model to systematically evaluate e-learning in educational institutions in China and Hong Kong. 

    The 26 items included in their model range from initial market demand for an e-learning program to technical support throughout the educational activity to both teaching and learning effectiveness once education is completed. 

    Frye and Hemmer (2012) have authored a guide for educators to use in choosing a program evaluation model that is theoretically and practically consistent with the purpose for and scope of evaluation. 

    These authors' intent is to help those evaluating educational programs to appreciate and adequately account for how complex program evaluation really is. The logic model for program evaluation has been popular for more than 2 decades and remains perhaps the most frequently used model in federal program evaluations (e.g., the Centers for Disease Control and Prevention's Morbidity and Mortality Weekly Report) as well as in the development of evaluation guidelines by many well-known nongovernmental organizations, such as the W.K. Kellogg Foundation and the United Way (Frye & Hemmer, 2012; Torghele et al., 2007). 

    Consistent with its roots in general system theory, the logic model views an education program as “a social system composed of component parts, with interactions and interrelations among the component parts, all existing within, and interacting with, the program’s environment” (Frye & Hemmer, 2012, p. 290). 

    Another recent use of Kirkpatrick's model for program evaluation is described by Nocera et al. (2016) in their report of a statewide nurse training program to prevent infant abuse. Standardized training developed by the National Center on Shaken Baby Syndrome was provided to nurses in 85 hospitals and one birthing center in North Carolina from 2008 to 2010 in preparation for statewide adoption. Satisfaction with training content and methods as well as long-term adherence were among evaluative findings. 

    Phillips, Hall, and Irving (2016) describe use of the logic model to evaluate interprofessional education of practitioners from mixed health professional backgrounds who provide care to patients with comorbid psychological and medical illness. Evaluation included observation, surveys, and network analysis conducted before, immediately after, and 3 months after completion of training. 

    Results demonstrated that confidence and knowledge increased immediately after training and that these increases were sustained 3 months later among members of seven healthcare disciplines. In addition, physicians sustained increased use of motivational interviewing after 3 months. 

    As stated earlier, the RSA model developed by Abruzzese (1992) remains useful as a general framework for categorizing the basic types of evaluation: process, content, outcome, impact, and total program. As depicted in this model, differences between these types are largely a matter of degree. For example, process evaluation occurs most frequently; total program evaluation occurs least frequently. 

    Content evaluation focuses on immediate effects of teaching; impact evaluation concentrates on more long-term effects of teaching. Conducting process evaluation requires fewer resources compared with impact and program evaluation, which require extensive resources for their implementation. The RSA model further illustrates one way that process, content, outcome, and impact evaluations can be considered together as components of total program evaluation. 

    According to Abruzzese (1992), resources required for total program evaluation may include the sum of resources necessary to conduct process, content, outcome, and impact evaluations. A program evaluation may require significant expenditures for personnel if the evaluation is conducted by an individual or team external to the organization. Additional resources required may include time, materials, equipment, and personnel necessary for data entry, analysis, and report generation. 

    The time span over which data are collected may extend from several months to one or more years, depending on the time frame established for meeting the goals to be evaluated. Du Hamel and colleagues (2011) exemplify Abruzzese's definition of total program evaluation in their description of a 14-week medical-surgical nursing certification review course they conducted. 

    The evaluators focused on long-term goals of advancing professional development and changing professional practice as well as the goal of improving patient outcomes. Evaluative data collected over several years included more traditional measures, such as the percentage of participating nurses who passed the certification examination and evaluation of the use of practice tests and clinical examples to promote active learning, as well as analysis of comments made by participants describing their use of reflection and their desire to continue learning. 

    As noted by the authors, demonstration of the effectiveness of continuing education is essential in the current climate of limited resources, and a focus that goes beyond knowledge obtained to address knowledge transferred into practice is imperative.

    Another example of a program evaluation consistent with Abruzzese's definition of total program evaluation is Rouse's (2011) use of Kirkpatrick's four-level model to comprehensively evaluate the effectiveness of health information management courses and programs. Rouse describes reaction (the first level) as addressing immediate reactions of the attendees to the setting, the instructor, the materials, and the learning activities. 

    What Abruzzese describes as a happiness index, Rouse labels a “smile sheet,” commenting that although satisfaction does not imply learning, dissatisfaction may prevent it. Kirkpatrick's second, third, and fourth levels are learning, behavior, and results, respectively. Rouse describes these levels, in turn, as evaluation of knowledge immediately after education is completed, evaluation of whether actual change has occurred in the workplace, and systemwide evaluation of the impact of the program. 
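
    As a compact way to keep Rouse's summary of Kirkpatrick's four levels straight, the sketch below maps each level to the kind of question it addresses. The exact wording of the questions is an illustrative assumption based on the description above, not language taken from Kirkpatrick or Rouse.

```python
# Illustrative mapping of Kirkpatrick's four levels (as summarized by Rouse, 2011)
# to the evaluation question each level addresses. Question wording is assumed.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "How did attendees respond to the setting, instructor, "
                    "materials, and learning activities?"),
    2: ("Learning", "What knowledge was gained immediately after the education?"),
    3: ("Behavior", "Has actual change occurred in the workplace?"),
    4: ("Results",  "What is the systemwide impact of the program?"),
}

for level, (name, question) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {question}")
```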

    Kirkpatrick's levels of program evaluation closely match Abruzzese's model. Nurse educators can find clinical examples of how different types of evaluation included in Abruzzese's RSA model relate to one another in Haggard's (1989) description of three dimensions in evaluating teaching effectiveness for the patient and in Rankin and Stallings's (2005) four levels of evaluation of patient learning. 

    The three dimensions described by Haggard and the four levels identified by Rankin and Stallings are consistent with and can be compared to the basic types of evaluation included in the RSA model. Models developed from an education theory base, such as the RSA model, have much in common with models developed from a patient care theory base, such as the two models put forth by Haggard and by Rankin and Stallings. 

    At least one important point about the difference between the RSA model and other models needs to be mentioned, however. That difference is depicted in the learner evaluation model. This learner-focused model emphasizes the continuum of learner participation, from the level of participation determined through needs assessment to learner performance over time once an adequate level of participation has been regained or achieved. Both this model and the RSA model have value in focusing and planning any type of evaluation but are especially important for impact and program evaluations.
