Evaluation and Evaluation Research Structure In Nursing Education

Afza.Malik GDA


Topics covered: Designing the Evaluation in Nursing Education, Structural Design of Nursing Evaluation, Evaluation Versus Evaluation Research in Nursing Education, and Differences in Evaluation and Evaluation Research.

Designing the Evaluation In Nursing Education

    Nurse educators can design an evaluation within the framework, or boundaries, already established by focusing the evaluation appropriately. In other words, the design must be consistent with the purpose, questions, and scope of the evaluation and must be realistic given the available resources. 

    Evaluation design includes at least three interrelated components: structure, methods, and instruments (Rouse, 2011).

Structural Design of Nursing Evaluation

    An important question to be answered in designing an evaluation is “How detailed should the evaluation be?” The obvious answer is that all evaluations should have some level of rigor, which means they must be precise, exact, and logically organized. In other words, all evaluations should be systematic, carefully and thoroughly planned or structured before they are conducted. 

    How rigorous the design structure must be depends on the questions to be answered, how complex the scope of the evaluation is, and how evaluation results will be used. The more the questions address cause and effect, the more complex the scope of the evaluation. Likewise, the more critical and broad-reaching the expected use of results, the more the evaluation design should be structured from a research perspective. 

Evaluation Versus Evaluation Research In Nursing Education

    Evaluation and research are not synonymous, but they are related activities. Traditionally, the primary difference between the two has been related to the purpose for conducting the evaluation or the study. The purpose of an evaluation is to measure whether a practice change is effective in a specific setting with a specific group of individuals (learners and/or teachers, in the case of education evaluation) during a specified time frame. 

    In contrast, the purpose of research is to generate new knowledge that can be used across settings and individuals with similar characteristics and demographics. As described earlier in this chapter, this distinction between research and evaluation is analogous to the distinction between external and internal evidence (Melnyk & Fineout-Overholt, 2015). 

    Differences between evaluation and evaluation research have become less distinct over the past several years with the explosion of what was once called applied research into a myriad of research types that include comparative effectiveness, translational, dissemination, implementation, and so on. 

    What these types of research commonly share is that all are intended to measure change in the “real world” setting as opposed to the tightly controlled setting of traditional randomized, placebo-controlled trials. 

    Participatory action research is perhaps the best example of research in which the “real world,” with all its inherent complexity and confounding variance, is combined with rigor. Froggatt and Hockley (2011) present two participatory action research studies to illustrate how evaluation fits within this type of research. 

    More recently, mixed methods research, which usually includes both qualitative and quantitative data and shares several characteristics with participatory action research, has emerged as a frequent design for conducting evaluation research (Marks-Maran, 2015; Phillips et al., 2016). 

    It should be noted that every example of process, content, outcome, impact, and total program evaluation published since 2007, as well as many reports published before that year and included here, was conducted as evaluation research, which is one type of applied research. Of course, not all outcome, impact, and program evaluations should be conducted as research studies. Some important differences do exist between evaluation and evaluation research. 

    One of the most significant relates to the influence of a primary audience. As discussed earlier, the primary audience, meaning the individual or group requesting the evaluation, is a major component to be considered in focusing an evaluation. The evaluator must design and conduct the evaluation consistent with the purpose and related questions identified by the primary audience. 

    Evaluation research, by contrast, does not have an identified primary audience. Consequently, researchers have the autonomy to develop a protocol to answer one or more questions that they themselves pose. A second difference between evaluation and evaluation research is related to timing. 

    The timeline within which evaluation results must be usable may not allow sufficient time to prospectively develop a research proposal and obtain institutional review board approval before beginning data collection. 

Differences In Evaluation and Evaluation Research

    Given the differences between evaluation and evaluation research, how are decisions about level of rigor of an evaluation translated into an evaluation structure? The structure of an evaluation design depicts the number of groups to be included in the evaluation, the number of evaluations or periods of evaluation, and the time sequence between an educational intervention and evaluation of that intervention. 

    A group can include one individual, as in the case of one-to-one nurse-patient teaching, or several individuals, as in the case of a nursing in-service program or workshop.

    A process evaluation might be conducted during a single patient education activity where the nurse observes patient behavior during instruction/demonstration and engages the patient in a question-and-answer exchange upon completion of each new instruction. 

    Because the purpose of process evaluation is to facilitate better learning while that learning is happening, education and evaluation occur at the same time in this case.

    Evaluation also may be conducted immediately after an educational intervention. This structure is probably the most commonly employed in conducting educational evaluations, although it is not necessarily the most appropriate. 

    If the purpose of conducting the evaluation is to determine whether learners who have just completed a class know specific content that they did not know before attending that class, then a structure that begins with collection of baseline data is more appropriate. 

    Collection of baseline data via a pretest, which can be compared with data collected via a posttest at one or more points in time after learners have completed the educational activity, provides an opportunity to measure whether change has really occurred. 
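    As an illustration only, the following minimal sketch shows how a pretest/posttest comparison might be analyzed. The scores are hypothetical, and the paired t-test is offered as one common analytic choice, not as a prescribed method:

```python
# A minimal sketch of a pretest/posttest comparison, assuming hypothetical
# knowledge-test scores (0-100) collected from the same ten learners before
# and after an educational activity.
from scipy import stats

pretest  = [52, 61, 48, 70, 55, 63, 58, 49, 66, 60]   # hypothetical baseline scores
posttest = [68, 75, 62, 78, 70, 71, 74, 60, 80, 72]   # hypothetical scores after class

# Paired t-test: each learner serves as his or her own control.
result = stats.ttest_rel(posttest, pretest)

mean_change = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
print(f"Mean change: {mean_change:+.1f} points")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```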

    The ability to measure change in a certain skill or level of knowledge, for example, also requires that the same instruments be used for pretest and posttest data collection at both points in time. 

    If the purpose of conducting an evaluation is to determine whether learners know content or can perform a skill resulting from an educational intervention, the most appropriate structure will include at least two groups: one receiving the new educational intervention and one receiving the usual education or standard of care. Both groups are evaluated at the same time, even though only one group is exposed to the new education. 

    The group receiving the new education program is called the treatment or experimental group, and the group receiving standard care or the traditional education program is called the comparison or control group. The two groups may or may not be equivalent. Equivalent groups are those with no known differences between them prior to some intervention, whereas nonequivalent groups may differ from one another in several ways. 
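    A minimal sketch of how such a two-group comparison might be analyzed appears below. The posttest scores are hypothetical, and Welch's t-test is used as a reasonable default when the groups may not be equivalent:

```python
# A minimal sketch of a two-group (treatment vs. comparison) evaluation,
# assuming hypothetical posttest scores for each group.
from scipy import stats

treatment  = [78, 82, 75, 88, 80, 77, 85, 79]   # hypothetical: new education program
comparison = [70, 74, 68, 72, 75, 69, 73, 71]   # hypothetical: standard education

# Welch's t-test (equal_var=False) does not assume equal group variances.
result = stats.ttest_ind(treatment, comparison, equal_var=False)
print(f"Treatment mean:  {sum(treatment) / len(treatment):.1f}")
print(f"Comparison mean: {sum(comparison) / len(comparison):.1f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```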

    For example, patients on nursing unit A may receive an educational pamphlet to read prior to attending a class, whereas patients on nursing unit B may attend the class without first reading the pamphlet. Because patients on the two units probably are different in many ways (e.g., in terms of age and diagnosis), they would be considered nonequivalent groups. 

    Use of the term nonequivalent is commonly encountered in discussions of traditional research designs. Quasi-experimental designs, such as nonequivalent control group designs, should be among those considered in planning an outcome, impact, or program evaluation. 

    If the purpose of an evaluation is to demonstrate, for example, that an education program caused fewer patient returns to the clinic or fewer nurses to leave the institution, the evaluation structure must have the rigor of evaluation research.

    Another type of quasi-experimental design, called a time series design, might include only one group of learners from whom evaluative data are collected at several points in time, both before and after receiving an educational intervention. 

    If data collected before the intervention consistently demonstrate lack of learner ability to comply with a treatment regimen, whereas data collected after the intervention consistently demonstrate a significant improvement in patient compliance with that regimen, the evaluator could argue that the education intervention was the reason for the improvement in this case. 
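    As a simple illustration, the sketch below summarizes a one-group time series using hypothetical weekly compliance rates measured before and after an intervention; a consistent shift across repeated measures is what lends this design its persuasive force:

```python
# A minimal sketch of a one-group time series evaluation, assuming
# hypothetical weekly compliance rates (%) measured four times before and
# four times after an educational intervention.
from statistics import mean

before = [42, 45, 41, 44]   # hypothetical pre-intervention compliance rates
after  = [68, 71, 70, 73]   # hypothetical post-intervention compliance rates

# A consistent jump after the intervention supports, though does not prove,
# the argument that the education caused the improvement.
print(f"Mean before: {mean(before):.1f}%   Mean after: {mean(after):.1f}%")
print(f"Observed change: {mean(after) - mean(before):+.1f} percentage points")
```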

    As noted previously, mixed-methods designs, once called pluralistic designs, are appearing more frequently in the literature as approaches especially suited for evaluation of projects that have a community base, that include participants from diverse settings or perspectives, or that require both program processes and outcomes to be included in the evaluation (Allen et al., 2012; Balas et al., 2013; Zhang & Cheng, 2012). 

    Because these designs often are comprehensive, resource-intensive, and long-term in nature, they are most appropriate for program evaluation.

    The literature on evaluation of nursing staff education and patient education has become an increasingly rich source of examples of how to conduct rigorous evaluation.

    A literature search that includes many of the following journals is recommended for planning evaluation of healthcare education in a cost-conscious and outcome-focused healthcare environment: Canadian Journal of Nursing Research, Clinical Effectiveness in Nursing, Evaluation & the Health Professions, Evidence-Based Nursing, Health Education Research, Journal of Advanced Nursing, Journal of Continuing Education in Nursing, Journal of Nursing Staff Development, Nurse Educator, Journal of Nursing Education, Nursing Research, Research in Nursing & Health, Worldviews on Evidence-Based Nursing, and Implementation Science.
