Conducting the Evaluation and Data Collection In Nursing Education

Afza.Malik GDA



Implementation of Evaluation In Nursing Education, Analyzing and Interpreting Data Collected, and Reporting Evaluation Results.

Implementation of Evaluation In Nursing Education

    How smoothly an evaluation is implemented depends primarily on how carefully and thoroughly the evaluation was planned and how carefully the instruments for data collection were selected or developed. However, these two factors alone do not always guarantee success. To minimize the effects of unexpected events that can occur when carrying out an evaluation, the following three methods are likely to contribute to successful completion of the process: 

1. Conduct a pilot test first. 

2. Include extra time to complete all the evaluation steps. 

3. Keep a sense of humor throughout the experience.

    Conducting a pilot test or trial run of the evaluation involves trying out the data collection methods, instruments, and plan for data analysis with a few individuals who are the same as or very similar to those who will be included in the full evaluation. 

    If the nurse educator plans to use any newly developed instruments for the evaluation, a pilot test must be conducted to assess the reliability, validity, interpretability, and feasibility of those new instruments. Also, a pilot test should be carried out prior to implementing a full evaluation that is expected to be expensive or time-consuming to conduct or on which major decisions will be based. 

    Process evaluation generally is not pilot tested unless a new instrument will be used for data collection, but pilot testing should be considered prior to conducting outcome, impact, or program evaluations. 

    Fields et al. (2016) conducted a pilot study for 1 year to test the feasibility and acceptability of a multidisciplinary team education program to help patients with gout manage their discomfort. Education included a nurse-taught curriculum, monthly phone calls from pharmacists, and patient knowledge exams at baseline, 6, and 12 months. 

    The curriculum was developed by a rheumatologist and a social worker. Both patients and providers completed evaluations of the program. Among 45 patients who were enrolled in the study, 42 remained enrolled at 6 months and 40 at 12 months. 

    After 1 year, 84.6% of patients rated the education as useful in understanding and managing their gout, 81% evaluated the education from nurses as helpful, and 50% rated the calls from pharmacists as helpful as well.

    Including extra time while conducting an evaluation means leaving room for unexpected delays. 

    Almost invariably, more time is needed than anticipated for planning the evaluation, collecting the data, analyzing the results, and reporting them in a form that is meaningful and useful to the primary audience. 

    Because delays not only will likely occur but also are likely to crop up at inconvenient times during the evaluation, keeping a sense of humor is vitally important. An evaluator with a sense of humor is more successful at maintaining a realistic perspective on the evaluation process as well as when reporting results that include negative findings. 

    However, it is important to acknowledge that an audience with a vested interest in positive evaluation results may blame the evaluator if results are less impressive than expected.

Analyzing and Interpreting Data Collected

    The purposes for conducting data analysis are twofold: 

(1) to organize data so that they can provide meaningful information, and

(2) to provide answers to evaluation questions. 

    The terms data and information are not the same. Data, whether a mass of numbers or a mass of comments, do not become information until they have been organized into coherent tables, graphs, or categories that are relevant to the purpose for conducting the evaluation.

    Basic decisions about how data will be analyzed are dictated by the nature of the data and by the questions used to focus the evaluation. 

    As described earlier, data can be either quantitative or qualitative. Data also can be described as either continuous or discrete. Age and level of anxiety are examples of continuous data; gender and diagnosis are examples of discrete data. 

    Finally, data can be differentiated by level of measurement. All qualitative data are at the nominal level of measurement, meaning they are described in terms of categories such as health focused versus illness focused. 

    Quantitative data, in contrast, can be at the nominal, ordinal, interval, or ratio level of measurement. The level of measurement of the data determines which statistics can be used to analyze those data. A useful suggestion for deciding how data will be analyzed is to enlist the assistance of someone with experience in data analysis. 
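    To make the connection between level of measurement and choice of statistics concrete, the short Python sketch below pairs each level with a few commonly used descriptive statistics. It is an illustration only, not drawn from the source text; the groupings reflect general statistical convention.

```python
# Illustrative pairing of measurement levels with common descriptive statistics.
# This mapping reflects general statistical convention, not a rule from the source text.
appropriate_statistics = {
    "nominal":  ["frequency counts", "percentages", "mode"],
    "ordinal":  ["frequency counts", "percentages", "median", "range"],
    "interval": ["mean", "standard deviation", "median", "range"],
    "ratio":    ["mean", "standard deviation", "median", "range"],
}

def suggest_statistics(level: str) -> list[str]:
    """Return descriptive statistics commonly used for a given level of measurement."""
    return appropriate_statistics[level.lower()]

print(suggest_statistics("nominal"))   # e.g., gender or diagnosis
print(suggest_statistics("interval"))  # e.g., a scored anxiety instrument
```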

    Analysis of data should be consistent with the type of data collected. In other words, all data analysis must be rigorous, but not all data analysis needs to include the use of inferential statistics. For example, qualitative data, such as verbal comments obtained during interviews and written comments obtained from open-ended questionnaires, are summarized, or themed, into categories of similar comments. 

    Each category or theme is qualitatively described by directly quoting one or more comments that are typical of that category. These categories then may be quantitatively described using descriptive statistics such as total counts and percentages. 
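    A minimal sketch of that last step, assuming the evaluator has already assigned each comment to a theme (the themes and comments below are invented for illustration), might look like this in Python:

```python
from collections import Counter

# Hypothetical themed comments; in practice the evaluator assigns each verbal
# or written comment to a category during content analysis.
themed_comments = [
    ("praise for teaching style", "The nurse explained everything clearly."),
    ("praise for teaching style", "I liked that questions were encouraged."),
    ("request for more time",     "The session felt rushed."),
]

# Describe the categories quantitatively with total counts and percentages.
counts = Counter(theme for theme, _ in themed_comments)
total = sum(counts.values())

for theme, count in counts.most_common():
    print(f"{theme}: {count} comment(s), {count / total:.0%}")
```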

    As noted earlier in this chapter, different qualitative methods for analyzing data continue to emerge as they gain traction and are recognized as legitimate in a scientific environment once ruled by traditional experimental quantitative methods. The method used should be consistent with the purpose for the evaluation, that is, to provide information that the primary audience can use for decision making. 

    Descriptive content analysis of qualitative data can be used to add meaning to numerical results from quantitative analysis. For example, an average score of 3.5 on a Likert scale of 1 = strongly disagree to 5 = strongly agree can be interpreted to mean that the majority of the students or patients responding to a survey positively viewed their educational program. Results of content analysis of comments provide narrative details that enrich the meaning of the quantitative results. 
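    As a simple worked illustration of that example (the response values below are made up), the mean of coded Likert responses can be computed and read against the scale anchors:

```python
# Hypothetical Likert responses coded 1 = strongly disagree ... 5 = strongly agree.
responses = [4, 3, 5, 3, 4, 2, 4, 4, 3, 3]

mean_score = sum(responses) / len(responses)
print(f"Mean rating: {mean_score:.1f} on a 1-5 scale")

# A mean above the scale midpoint (3.0) suggests that, on average, respondents
# agreed with the statement, i.e., they viewed the educational program positively.
if mean_score > 3.0:
    print("Responses lean toward agreement: a positive view of the program.")
```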

    The first step in analyzing quantitative data consists of organizing and summarizing the data using statistics, such as frequencies and percentages, that describe the sample or population from which the data were collected. A description of learners in a sample drawn from a larger population, for example, might include such demographic information as age, gender, and diagnosis.
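    A minimal sketch of this first step, assuming a small set of learner records with the kinds of discrete characteristics mentioned earlier (the records are invented for illustration), follows:

```python
from collections import Counter

# Hypothetical learner records; gender and diagnosis are discrete (nominal) variables.
learners = [
    {"gender": "female", "diagnosis": "gout"},
    {"gender": "male",   "diagnosis": "gout"},
    {"gender": "female", "diagnosis": "hypertension"},
    {"gender": "female", "diagnosis": "gout"},
]

def describe(records, variable):
    """Print frequency counts and percentages for one discrete variable."""
    counts = Counter(record[variable] for record in records)
    for value, count in counts.most_common():
        print(f"{variable} = {value}: {count} ({count / len(records):.0%})")

describe(learners, "gender")
describe(learners, "diagnosis")
```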

    The next step in analyzing quantitative data is to select the statistical procedures appropriate for the type of data collected to answer the questions asked during the planning phase of the evaluation. Again, nurse educators are encouraged to enlist an expert statistician to assist with data analysis and interpretation.
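    As one hypothetical illustration (not a recommendation for any particular dataset), a paired t-test comparing knowledge scores before and after a program could look like the sketch below; the appropriate procedure always depends on the data and the evaluation questions, which is why consulting a statistician is advised.

```python
from scipy import stats

# Hypothetical paired knowledge-test scores (interval-level data) for the same
# learners before and after an educational program.
pretest  = [62, 70, 55, 68, 74, 60, 65, 71]
posttest = [75, 82, 66, 79, 85, 70, 72, 88]

# A paired t-test is one procedure that may fit this design.
result = stats.ttest_rel(posttest, pretest)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```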

Reporting Evaluation Results 

    Results of an evaluation must be reported if the evaluation is to be of any use. Such a statement seems obvious, but many times an evaluation is carried out, yet its results are never made public. Often people participate in an evaluation but never receive feedback or see the final report. How many times have nurses conducted an evaluation without sharing their findings? 

    Almost all health professionals, if they are honest, would have to answer that they have been guilty of this on more than one occasion.

    Reasons for not reporting evaluation results are diverse and numerous. The following are four major reasons why evaluation data never make the trip from the spreadsheet to the customer:

1. Ignorance of who should receive the results 

2. Belief that the results are not important or will not be used 

3. Lack of ability to translate findings into language useful in producing a final report 

4. Fear that results will be misused

    The following guidelines can significantly increase the likelihood that results of the evaluation will be reported to the appropriate individuals or groups, in a timely manner, and in usable form: 

Be audience focused

Stick to the evaluation purpose

Use data as intended

Be Audience Focused

    The purpose for conducting an evaluation is to provide information for decision making by the primary audience. The report of evaluation results, therefore, must be consistent with that purpose. One rule of thumb: Always begin an evaluation report with an executive summary or an abstract that is no longer than one page. No matter who the audience members are, their time is important to them and they want something succinct to read. 

    A second rule of thumb is to present evaluation results in a format and language that the audience can understand and use without additional interpretation. This statement means that in the body of the report, important information should be written using nontechnical terms. For example, graphs and charts generally are easier to understand than are tables of numbers. 

    If a secondary audience of technical experts also will receive the report of evaluation results, it should include an appendix containing the more detailed or technically specific information.

    A third rule of thumb is that the evaluator should make every effort to present results in person as well as in writing. 

    A direct presentation, which should include specific recommendations for how the evaluation results might be used, provides an opportunity for the evaluator to answer questions and to assess whether the report meets the needs of the audience. Giving specific recommendations may increase the likelihood that the results of the evaluation will actually be used.
