Data Evaluation Methods In Nursing Education

Afza.Malik GDA


 Evaluation Methods Design and Structure; Types of Data to Collect In Nursing Education; Organizing and Analyzing Data: Statistical Tests; What Data to Collect and from Whom; How, When, and Where to Collect Data; Who Collects Data.

Evaluation Methods Design and Structure

    The focus of evaluation determines the evaluation design structure. The design structure, in turn, provides the basis for determining what evaluation methods should be used to collect data. Answers to the following questions can assist in selecting the most appropriate, feasible methods when conducting a particular evaluation in a particular setting and for a specific purpose:

Which types of data will be collected? 

What data will be collected and from whom? 

How, when, and where will data be collected?

 Who will collect the data?

Types of Data to Collect In Nursing Education

    Evaluation of healthcare education includes collecting data about people, about the educational program or activity, and about the environment in which the educational activity takes place. Data about all three of these aspects are required for process, outcome, impact, and program evaluations. Content evaluations may be limited to data about the people and the program, although this limitation is not necessary. 

    Types of data that are collected about people can be classified as demographic (e.g., age, gender, health status) as well as cognitive, affective, or psychomotor behaviors. The types of data that are collected about educational activities or programs generally include such factors as cost, length, number of educators required, teaching-learning methods used, amount and type of materials required, and so on. 

    The types of data that are collected about the environment in which a program or activity is conducted generally include such characteristics as temperature, lighting, location, layout, space, and noise level. Given the possibility that an unlimited and overwhelming amount of data could be collected, how do you decide which data should be gathered? The most straightforward answer to this question is that you should collect data that will answer the questions that were asked when deciding the evaluation focus. 

    The likelihood that the evaluator will collect the right amount of the right type of data to answer evaluation questions can be significantly improved by:

(1) remembering that any data collected must be used, and

(2) using operational definitions so that everyone who is involved understands what is being evaluated.

    An operational definition must clearly define one or more words or phrases being used and must be written in measurement terms. Functional health status, for example, can be theoretically defined as an individual's ability to independently carry out activities of daily living without self-perceived undue difficulty or discomfort. 

    Functional health status can be operationally defined as an individual's composite score on the SF-36 (Short Form 36-item) survey instrument (Stewart, Hays, & Ware, 1988; Ware, Davies-Avery, & Donald, 1978). 

    The SF-36, which has undergone years of extensive reliability and validity testing with a wide variety of patient populations and in several languages, is generally considered the gold standard for measuring functional health status from the individual's perspective. Continuously updated information about the SF-36, as well as different versions of the actual instrument, including a version for children, can be found online.
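
    To make the idea of an operational definition concrete, the following Python sketch computes a composite score from a set of survey item responses. The five-item survey, the 1-5 response scale, and the 0-100 rescaling are simplifying assumptions made only for illustration; actual SF-36 scoring follows the instrument's published scoring manuals.

    # Illustrative sketch only: the item count, 1-5 scale, and 0-100
    # rescaling are assumptions; real SF-36 scoring is more involved.
    def composite_score(responses, min_raw=1, max_raw=5):
        """Rescale each raw item response to 0-100 and average them."""
        rescaled = [100 * (r - min_raw) / (max_raw - min_raw) for r in responses]
        return sum(rescaled) / len(rescaled)

    # Operationally defined functional health status = the composite score.
    patient_responses = [4, 5, 3, 4, 2]   # hypothetical item responses
    print(f"Functional health status score: {composite_score(patient_responses):.1f}")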

    As another example, patient compliance can be theoretically defined as the patient's regular and consistent adherence to a prescribed treatment regimen. For use in outcome evaluation of a specific educational activity, patient compliance might be operationally defined as the patient's demonstration of unassisted and error-free completion of all steps in the sterile dressing change as observed in the patient's home on three separate occasions at 2-week time intervals. 

    These examples show that an operational definition states exactly which data will be collected. In the first example, measurement of functional health status requires collection of patient survey data using a specific self-administered questionnaire. 

    The second example provides even more information about data collection than does the first, by including where and how many times the patient's performance of the dressing change is to be observed, as well as stating that criteria for compliance include both unassisted and error-free performance on every occasion. 
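
    Because the second operational definition states its criteria so explicitly, it can be expressed directly as decision logic. The following sketch applies those criteria to hypothetical observation records; the field names are invented for illustration, and an actual evaluation would define its own documentation form.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        """One home visit; the fields are hypothetical, for illustration only."""
        unassisted: bool    # patient completed all steps without help
        error_free: bool    # no step performed incorrectly

    def is_compliant(observations):
        """Apply the operational definition: all three occasions must pass."""
        return len(observations) == 3 and all(
            o.unassisted and o.error_free for o in observations
        )

    visits = [Observation(True, True), Observation(True, True), Observation(True, False)]
    print(is_compliant(visits))   # False: an error occurred on the third visit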

Organizing and Analyzing Data: Statistical Tests

    In addition to data being categorized as describing people, programs, or the environment, data also can be categorized as quantitative or qualitative. Quantitative data are expressed in numbers and generally are stated as statistics, such as the frequency, mean, median, ratio, t statistic, F statistic, or chi-square. Numbers can be used to answer such questions as how much, how many, how often, and so on, in terms that are commonly understood by the audience for the evaluation. 

    Mathematical analysis of data can, for example, demonstrate with some level of precision and reliability whether a learner's knowledge or skill has changed since completing an educational program, or how much improvement in a learner's knowledge or skill is the result of an educational program. 
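
    As a minimal sketch of such an analysis, the following code compares hypothetical pretest and posttest scores for eight learners using a paired t-test (assuming the scipy library is installed); a small p value would suggest that the observed improvement is unlikely to be due to chance alone.

    from statistics import mean
    from scipy import stats

    pre  = [62, 70, 55, 68, 74, 60, 66, 71]   # hypothetical pretest scores
    post = [75, 82, 63, 80, 85, 72, 79, 83]   # hypothetical posttest scores

    t_stat, p_value = stats.ttest_rel(post, pre)   # paired t-test on matched learners
    print(f"Mean change: {mean(post) - mean(pre):.1f} points")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")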

    Qualitative data, on the other hand, include feelings, behaviors, and words or phrases that generally are summarized into themes or categories. Such data also can be described in quantitative terms, such as percentages or counts, but this transformation eliminates the richness and insight into the responses expressed by individuals about their experiences. Qualitative data also can be used as background to better interpret quantitative data, especially if the evaluation is intended to measure such value-laden or conceptual terms as “satisfaction” or “quality”. 
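
    The following sketch shows that transformation: coded qualitative responses tallied into theme counts and percentages. The coded themes are hypothetical, and the richness lost in the counts is exactly what the paragraph above cautions about.

    from collections import Counter

    # Hypothetical responses already coded into themes by the evaluator
    coded_responses = ["felt calmer", "felt calmer", "more confident",
                       "felt calmer", "no change", "more confident"]

    counts = Counter(coded_responses)
    total = sum(counts.values())
    for theme, n in counts.most_common():
        print(f"{theme}: {n} ({100 * n / total:.0f}%)")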

    Any evaluation may be strengthened by collecting both quantitative and qualitative data. For example, an evaluation to determine whether a stress reduction class resulted in decreased work stress for participants could include participants' qualitative expressions of how stressed they feel plus quantitative data, such as pulse and blood pressure readings. 

    Although collecting both quantitative and qualitative data is intuitively appealing, doing so is resource intensive, so evaluators must be certain that the focus of the evaluation justifies the decision to collect both types of data. 

What Data to Collect and from Whom 

    Data can be collected directly from the individuals whose behavior or knowledge is being evaluated, from family caregivers or significant others as representatives of these individuals, or from documents or databases that have already been created. Whenever possible, researchers should plan to collect at least some data directly from the individuals being evaluated. In the case of process evaluation, data should be collected from all learners and all educators participating in the educational activity. 

    Content and outcome evaluations should include data from all learners at the completion of one or more educational activities. Because impact and total program evaluations have a broader scope than do process, content, and outcome evaluations, collecting data from all individuals who participate in an educational program over an extended time may be impossible. 

    This difficulty arises because data collectors may not be able to locate every participant or they may lack sufficient resources to gather data from such large numbers of people. When all participants cannot be counted or located, data may be collected from a sample (subset) of participants who are considered to represent the entire group. 

    If an evaluation is planned to collect data from a sample of participants, it should include representatives of the entire group. A random selection of participants from whom data are collected can minimize bias in the sample but cannot guarantee representativeness. For example, an impact evaluation was conducted to determine whether a 5-year program supporting home-based health education improved the general health status of individuals in the community served by the program. 

    A random sample of community members could be generated by first listing and numbering all members' names and then drawing numbers from a random numbers table until a 10% sample is obtained. Such a method for selecting the sample of community members would eliminate intentional selection of those individuals who were the most active program participants and who might therefore have a better health status than does the entire community. 

    At the same time, the 10% random sample could unintentionally include only those individuals who did not participate in the health education program. Data collected from this sample of nonparticipants would be just as misleading as data collected from the first sample. A more representative sample for this evaluation should include both participants and nonparticipants, ideally in the same proportions in the sample as in the community. 
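
    A minimal sketch of such proportional sampling, assuming a hypothetical community of 300 participants and 700 nonparticipants: drawing 10% from each group separately keeps the sample in the same proportions as the community.

    import random

    participants    = [f"participant_{i}" for i in range(300)]      # hypothetical roster
    nonparticipants = [f"nonparticipant_{i}" for i in range(700)]   # hypothetical roster

    def ten_percent(group):
        """Randomly select 10% of a group, as drawing from a random numbers table would."""
        return random.sample(group, k=max(1, len(group) // 10))

    sample = ten_percent(participants) + ten_percent(nonparticipants)
    print(len(sample))   # 30 + 70 = 100, i.e., 10% of the community of 1,000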

    Preexisting data should be used as a source of evaluative data only if the purpose for which they were collected mirrors the purpose of the evaluation currently being considered, if operational definitions are the same, and if the same population of interest is the focus of both past and current evaluations. Data already in existence generally are less expensive to obtain and available sooner than are original data. 

How, When, and Where to Collect Data

Methods for how data can be collected include the following:

  • Observation
  • Interview
  • Questionnaire or written examination 
  • Record review
  • Secondary analysis of existing databases

    Which method is selected depends, first, on the type of data being collected and, second, on the available resources. When possible, data should be collected using more than one method. Using multiple methods provides the evaluator, and consequently the primary audience, with more complete information about the program or performance being evaluated than could be accomplished using a single method. 

    For example, the nurse teaching patients might use both observation and teach-back to determine whether a family caregiver can correctly perform a dressing change and explain why each step of the dressing change is important (Visiting Nurse Associations of America, 2012). 

    The evaluator can conduct observations in person or can videotape them for viewing at some later time. In the combined role of educator-evaluator, the nurse educator who is conducting a process evaluation can directly observe a learner's physical, verbal, psychomotor, and affective behaviors to respond to them in a timely manner. Using videotape or a nonparticipant observer also can be beneficial for picking up the educator's own behaviors, of which the educator is unaware but which might be influencing the learner. 

    The timing of data collection, or when data collection takes place, has already been addressed both in discussion of different types of evaluation and in descriptions of evaluation design structures. Process evaluation, for example, generally occurs during an educational activity. Content evaluation takes place immediately after completion of education. 

    Outcome evaluation occurs sometime after completion of education, when learners have returned to the setting where they are expected to use new knowledge or perform a new skill. Impact evaluation generally is conducted from weeks to years after the educational program, because its purpose is to determine what change has occurred within the community or institution as a result of an educational intervention. 

    The timing of data collection for program evaluation is less obvious than for other types of evaluation, in part because different descriptions of what constitutes a program evaluation can be found both in the literature and in practice. As discussed earlier, Abruzzese (1992) describes data collected for program evaluation as occurring over a prolonged period because program evaluation is itself the culmination of process, content, outcome, and impact evaluations already conducted. 

    Where an evaluation is conducted can have a major effect on evaluation results. Those conducting an evaluation must be careful not to make the decision about where to collect data based on convenience for the data collector. For example, an appropriate setting for conducting a content evaluation may be in the classroom or skills laboratory where learners have just completed class instruction or training. 

    An outcome evaluation to determine whether training has improved the nurse's ability to perform a skill with patients on the nursing unit, however, requires that data collection (in this case, observation of the nurse's performance) be conducted on the nursing unit. 

    As another example, an outcome evaluation to determine whether discharge teaching in the hospital enabled the patient to provide self-care at home requires that data collection, or observation of the patient's performance, be conducted in the home. What if available resources are insufficient to allow for home visits by the evaluator? 

    To answer this question, keep in mind that the focus of the evaluation is on performance by the patient, not performance by the evaluator. Training a family member, a visiting nurse, or even the patient to observe and record patient performance at home is preferable to bringing the patient to a place of convenience for the evaluator.

Who Collects Data

    The educator conducting the class or activity being evaluated commonly collects evaluation data because he or she is already present and interacting with learners. Combining the role of evaluator with that of educator is an appropriate method for conducting a process evaluation because evaluative data are integral to the teaching-learning process. 

    Inviting another educator or a patient representative to observe a class can provide additional data from the perspective of someone who does not have to divide his or her attention between teaching and evaluating. This second, and perhaps less biased, input can strengthen the legitimacy and usefulness of the evaluation results. 

    Also, data can be collected by the learners themselves, by other colleagues within the department or institution, or by someone from outside the institution. Fairchild's (2012) description of a mixed-methods evaluation of a service learning academic-practice partnership with rural hospitals provides an example of data collection that included faculty serving as coaches for students, students serving as educators and coleaders of project teams, and hospital staff and administrators who were primary recipients of support. 

    Data collection included use of online surveys consisting of Likert-scaled items and open-ended questions asking for narrative comments regarding strengths and areas for improvement of the partnership. 
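
    As a sketch of how such mixed survey data might be summarized, the following code reports the mean of each Likert-scaled item and keeps the narrative comments verbatim for later theme coding; the item wording and responses are hypothetical.

    from statistics import mean

    likert_responses = {   # 1 = strongly disagree ... 5 = strongly agree
        "Coaching by faculty was effective":  [5, 4, 4, 5, 3],
        "Project team goals were clear":      [4, 4, 3, 4, 4],
    }
    comments = ["More on-site time with hospital staff would help."]

    for item, scores in likert_responses.items():
        print(f"{item}: mean {mean(scores):.1f} (n={len(scores)})")
    print("Narrative comments:", comments)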

    The individuals who are chosen to carry out data collection become an extension of the evaluation instrument. If the data collected are to be reliable, unbiased, and accurate, the data collectors must likewise be unbiased and sufficiently expert at the task. Use of unbiased expert data collectors is especially important for collecting observation and interview data because these data in part depend on the subjective interpretation by the data collector. 

    Also, data collectors can influence the information that is obtained in other ways. For example, if staff nurses are asked to complete a job satisfaction survey and their head nurse is asked to collect the surveys for return to the evaluator, what problems might occur? Might some staff nurses be hesitant to provide negative scores on certain items, even though they hold one or more negative opinions? Likewise, physiological data can be altered, however unintentionally, by the data collector. 

    For example, an outcome evaluation might be conducted to determine whether a series of biofeedback classes given to young executives can reduce stress as measured by pulse and blood pressure. How might some executives' pulse and blood pressure results be affected by a data collector who is physically attractive or overtly acting rushed or frustrated? Use of trained data collectors from an external agency is, in most cases, not a financially viable option. 

    The potential for a data collector to bias data can be minimized using less expensive alternatives, however. First, the number of data collectors should be limited as much as possible, because this step automatically decreases person-based variation. Also, individuals assisting with data collection should wear similar conservative clothing and speak in a moderate tone. 

    Because “moderate tone,” for example, may not be interpreted the same way by everyone, at least one practice session or dry run should be held with all data collectors prior to conducting the evaluation. In addition, data collection should be conducted by someone who has no vested interest in the results and who will be perceived as unbiased and nonthreatening by those persons providing the data. 

    Furthermore, providing interview scripts to be read verbatim by the interviewer can ensure that all patients or staff being interviewed are asked the same questions. With the emphasis on continuous quality improvement (CQI) in healthcare organizations, nurses and other professionals are expected to become more knowledgeable about what data are needed and how to use measurement techniques to collect evidence in their work setting (The Joint Commission, 2017). 

    In response to the demand for measurable evidence to support healthcare decision making, the field of data analytics has grown exponentially. The enactment of the American Recovery and Reinvestment Act of 2009 and the introduction of incentives to promote meaningful use of health information technology and electronic health records (EHRs) have effectively provided nurse educators and other healthcare providers with a rich source of data that can be used to evaluate delivery of care and resulting patient outcomes (Centers for Medicare & Medicaid Services, 2017). 

    Although some benefits of EHRs are yet to be realized, many facilities already have staff available to help nurse educators extract data that are useful for evaluation.

    Use of a portfolio as a method for evaluation of an individual's learning over time has been documented in the literature for more than 35 years, primarily from an academic perspective (Bahreini, Moattari, Shahamat, Dobaradaran, & Ravanipour, 2013; Garrett, MacPhee, & Jackson, 2012; Hayes, 2007; Laux & Stoten, 2016). 

    Although formal education of nursing students is not the focus of this text, other uses of portfolios are relevant to the role of the practice based nurse as educator. Individual completion of a professional portfolio is a current requirement for recertification in some nursing specialties in the United States and for periodic registration in the United Kingdom (Morgan & Dyer, 2015). 

    Given the growing importance of a nurse's portfolio documentation for career advancement, the nurse educator may find several colleagues asking for assistance in creating and maintaining a portfolio that provides a strong base of evaluative evidence demonstrating that nurse's continuing professional development and consequent impact on practice (Chamblee, Dale, Drews, Spahis, & Hardin, 2015; Hespenheide, Cottingham, & Mueller, 2011; Oermann, 2002; Schneider, 2016). 

    Perhaps the best suggestion the nurse educator might offer and heed is to clarify the focus of the portfolio as determined by the requiring organization (in this case, the primary audience) as stated in that organization's criteria for portfolio completion. Is the focus more on process evaluation, on outcome evaluation, or on both? Specifically, is the nurse expected to demonstrate reflective practice? If so, what does the organization accept as evidence of reflective practice? 

    One reason focus clarification is so challenging is that there is no consistent description of how portfolios are to be used or what they are to contain. In its simplest form, a practice portfolio is composed of a collection of information and materials about one's practice that have been gathered over time. The issue of whether this collection is intended to demonstrate previous learning or whether the process of collecting is itself a learning experience continues to foster debate (Fitch, Peet, Reed, & Tolman, 2008). 

    Central to this issue is the notion of reflective practice. First coined by Schön (1987), the term reflective practice still does not have a commonly agreed-upon definition (Cotton, 2001; Epstein, 2008; Lavoie, Pepin, & Cossette, 2017; Morgan, 2009; Wainwright, Shepard, Harman, & Stephens, 2010). Noveletsky (2007) offers one definition of reflective practice “as the process of exposing the contradictions in practice... [when] the health professional must first come to understand what he or she defines as ideal practice” (p. 141). 

    Lavoie et al. (2017) describe reflection as a process that helps the individual (in this case, the new nurse graduate) “to understand the meaning of a problematic situation, which is the relationship between causes, actions, and consequences” (p. 52) so they might improve future observations and actions in similar situations. Schön (1987) describes two key components of reflective practice as reflection in action and reflection on action. 

    Reflection in action occurs when the nurse introspectively considers a practical activity while performing it so that change for improvement can be made at that moment. In contrast, reflection on action occurs when the nurse introspectively analyzes a practical activity after its completion to gain insights for the future (Cotton, 2001). From an evaluation perspective, these components are similar in meaning to formative and summative evaluation, indicating that reflective practice has more than one focus (Paschal, Jensen, & Mostrom, 2002).
