The Research Critique or Evaluating Research Report

Afza.Malik GDA

 A research critique appraises a study along several dimensions (substantive/theoretical, methodologic, ethical, interpretive, and presentational), weighing concerns such as internal validity and, in qualitative work, analytic approaches such as Colaizzi's or Giorgi's method.

 What Is a Research Report?

    Nursing practice can be based on solid evidence only if research reports are critically appraised. Consumers sometimes think that if a report was accepted for publication, the study must be sound. Unfortunately, this is not the case. Indeed, most research has limitations and weaknesses. 

    Although disciplined research is the best possible means of answering many questions, no single study can provide conclusive evidence. Rather, evidence is accumulated through the conduct and evaluation of several studies addressing the same or a similar research question. Consumers who can do reflective and thorough critiques of research reports also play a role in advancing nursing knowledge.

Critical Research Decisions

    Although no single study is infallible, there is a tremendous range in study quality from nearly worthless to exemplary. Study quality is closely tied to the decisions researchers make in conceptualizing, designing, and executing studies and in interpreting and communicating results. Each study has its own particular flaws because each researcher, in addressing the same or a similar question, makes different decisions about how the study should be done. It is not uncommon for researchers who have made different decisions to arrive at different answers to the same question.     

    It is precisely for this reason that consumers must be knowledgeable about the research process. As a consumer of research reports, you must be able to evaluate researchers' decisions so that you can determine how much faith to put in their conclusions. You must ask: What other approaches could have been adopted, and, if adopted, would the results have been more reliable, believable, or replicable? In other words, you must evaluate the impact of researchers' decisions on a study's ability to reveal the truth.

Purpose of Research Critiques

    Research reports are evaluated for a variety of purposes. Students are often asked to prepare critiques to demonstrate their methodological skills. Seasoned researchers are sometimes asked to write critiques of manuscripts to help journal editors make publication decisions or to accompany reports as published commentaries; they may also be asked to present an oral critique if they are invited as discussants of a paper at a professional conference.

    Journal clubs in clinical settings may meet periodically to critique and discuss research studies. And, perhaps most important, critiquing individual studies plays a role in assembling evidence into integrative reviews of the literature on a topic. For all these purposes, the goal is to develop a balanced evaluation of a study's contribution to knowledge. Research critiques are not just reviews or summaries of a study; rather, they are thoughtful, critical appraisals of the strengths and limitations of a piece of research.

    A written critique should serve as a guide to researchers, to editors, or to practitioners. Critiques should offer guidance about ways in which study results may have been compromised, and should provide guidance about alternative research strategies to address the research question. Critiques should thus help to advance a particular area of knowledge. The function of critical evaluations of nursing studies is not to hunt for and expose mistakes. 

    A good critique objectively and critically identifies adequacies and inadequacies, virtues as well as faults. Sometimes the need for such balance is obscured by the terms critique and critical appraisal, which connote unfavorable observations. The merits of a study are as important as its limitations in coming to conclusions about the worth of its findings. Therefore, research critiques should reflect a balanced consideration of a study's validity and significance. In the sections that follow, we present some specific elements to consider in evaluating a study.

Elements of a Research Critique

    Research reports have several important dimensions that may need to be considered in a critical evaluation of a study's worth. These include the substantive/theoretical, methodologic, interpretive, ethical, and presentational/stylistic dimensions of a study. It should be noted, however, that some critiques, such as those prepared for journal editors, are likely to focus primarily on substantive and methodologic issues.

Substantive and Theoretical Dimensions

    Readers of a research report need to determine whether a study was worthy in terms of the significance of the problem, the soundness of the conceptualizations, and the appropriateness of the conceptual framework. The research problem should have clear relevance to some aspect of nursing. Thus, even before you learn how a study was done, you should evaluate whether the study should have been conducted. Of course, your own disciplinary orientation should not bear on an objective evaluation of the study's significance. 

    A clinical nurse might not be intrigued by a study focusing on determinants of nursing turnover, but a nursing administrator trying to improve staffing decisions might find such a study useful. It is important, then, not to adopt a myopic view of a study's importance and relevance. Even problems that are clearly relevant to nursing, however, are not necessarily worthy of a new study. You must ask a question such as: Given what we know about this topic, is this research the right next step? Knowledge tends to be incremental.

    Researchers must consider how to advance knowledge on a topic in beneficial ways. They should avoid unnecessary replications of a study once a body of research clearly points to an answer, but they should also not leap several steps ahead when there is an insecure foundation. Sometimes, replication is exactly what is needed to enhance the credibility or generalizability of earlier findings. An issue that has both substantive and methodological implications is the congruence between the study question and the methods used to address it. 

    There must be a good fit between the research problem on the one hand and the overall study design, the method of collecting research data, and the approach to analyzing those data on the other. Questions that deal with poorly understood phenomena, with processes, with the dynamics of a situation, or with in-depth description, for example, are usually best addressed with flexible designs, unstructured methods of data collection, and qualitative analysis. 

    Questions that involve the measurement of well-defined variables, cause-and-effect relationships, or the effectiveness of a specific intervention, however, are usually better suited to more structured, quantitative approaches using designs that offer control over the research situation. Another issue is whether researchers have appropriately placed the research problem into a larger context, in terms of prior research and a conceptual framework. As we emphasized, researchers do little to enhance the value of a study if the connection between the research problem and a conceptual framework is contrived.

    However, a specific research problem that is genuinely framed as part of some larger intellectual issue can usually advance knowledge more than an inquiry that ignores its theoretical underpinnings. Even when qualitative studies are not viewed through the lens of a formal theoretical or conceptual framework, researchers should still address the philosophical or theoretical underpinnings of the chosen research tradition; for example, symbolic interactionism should be addressed in a grounded theory study.

    The substantive and theoretical dimensions of a study are normally communicated in a report's introduction. The manner in which introductory materials are presented is vital to the proper understanding and appreciation of what researchers have accomplished.

Methodologic Dimensions

    Once a research problem has been identified, researchers make a number of important decisions about how to go about answering the research questions or testing the hypotheses. It is your job as a critic to evaluate those decisions and their consequences. In fact, the heart of a research critique lies in the analysis of the methodologic decisions adopted. Although researchers make hundreds of decisions about the methods for conducting a study, some are more critical than others. In a quantitative study, the four major decision points on which you should focus critical attention are as follows:

• Decision 1, Design: What design will yield the most unambiguous and meaningful (internally valid) results about the relationship between the independent variable and dependent variable, or the most valid descriptions of concepts under study? What extraneous variables need to be controlled, and how best can this be accomplished?

• Decision 2, Sample: Who should participate in the study? What are the characteristics of the population to which the findings should be generalized (external validity)? How large should the sample be, from where should participants be recruited, and what sampling approach should be used? (For one hedged illustration of sample-size estimation, see the sketch after this list.)

• Decision 3, Data collection: What method should be used to collect the data? How can the variables be operationalized and reliably and validly measured?

• Decision 4, Data analysis: What statistical analyses will provide the most appropriate tests of the research hypotheses or answers to the research questions?
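    To make two of these decision points concrete, here is a minimal Python sketch, using the scipy and statsmodels libraries, of an a priori power analysis (Decision 2) and an independent-samples t test (Decision 4). The effect size, alpha, power, and data below are entirely hypothetical, and a t test is only one of many analyses a researcher might defensibly choose.

```python
# A hedged illustration of Decisions 2 and 4 above. All numbers (effect
# size, alpha, power, and the example data) are hypothetical.
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Decision 2 (sample): an a priori power analysis for a two-group design.
# Assumes a medium effect size (Cohen's d = 0.5), alpha = .05, power = .80.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Approximate participants needed per group: {n_per_group:.0f}")  # ~64

# Decision 4 (analysis): an independent-samples t test, one defensible
# choice when comparing a continuous outcome across two groups.
intervention = [3.1, 2.8, 3.5, 2.2, 2.9, 3.0]  # hypothetical pain scores
control = [4.0, 3.8, 4.2, 3.5, 3.9, 4.1]
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

    The point of the sketch is simply that each decision, from how many participants to recruit to which test to run, can and should be justified explicitly rather than assumed.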

    In a quantitative study, these methodologic decisions are typically made up front, and researchers then execute the prespecified plan. Qualitative researchers also make some up-front decisions, such as the research tradition that best matches the research question (e.g., grounded theory versus phenomenology). Another up-front methodologic decision in qualitative research is the data analysis approach to be used. For example, in a phenomenological study, researchers usually prespecify whether the data will be analyzed by, say, Colaizzi's method or Giorgi's method. Ongoing methodologic decisions do occur while collecting and analyzing qualitative data, however. In qualitative studies, the major methodologic decisions you should consider in your critique are as follows:

• Decision 1, Design: Which research tradition best matches the research question?

• Decision 2, Setting and study participants: What setting will yield the richest information about the phenomenon under study? Who should participate, and how can participants be selected to enhance the study's theoretical richness? How many participants are needed to achieve data saturation? (For one informal way to document saturation, see the sketch after this list.)

• Decision 3, Data sources: What should the sources of data be, and how should data be gathered? Should multiple sources of data (e.g., unstructured interviews and observations) be used to achieve method triangulation?

• Decision 4, Data analysis: What data analysis techniques are appropriate for the research tradition?

• Decision 5, Quality enhancement: What types of evidence can be obtained to support the credibility, transferability, dependability, and confirmability of the data, the analysis, and the interpretation?

    Because of practical constraints, studies almost always entail compromises between what is ideal and what is feasible.
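    With respect to Decision 2, data saturation is ultimately an interpretive judgment, but reviewers sometimes look for evidence that researchers documented it. The following Python sketch shows one informal, hypothetical way of tallying newly emerging codes across successive interviews; the code labels are invented for illustration and do not come from any actual study.

```python
# Hypothetical illustration: tallying newly emerging codes across
# successive interviews as one informal signal of data saturation.
# The code labels below are invented for illustration.
interview_codes = [
    {"stigma", "disclosure", "family support"},  # interview 1
    {"stigma", "self-blame", "family support"},  # interview 2
    {"disclosure", "coping", "self-blame"},      # interview 3
    {"coping", "stigma"},                        # interview 4: nothing new
    {"family support", "disclosure"},            # interview 5: nothing new
]

seen = set()
for i, codes in enumerate(interview_codes, start=1):
    new_codes = codes - seen   # codes not encountered in earlier interviews
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s) {sorted(new_codes)}")
# Several consecutive interviews yielding no new codes is suggestive of
# saturation, though the final judgment remains interpretive.
```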

Ethical Dimensions

    In performing a research critique, you may (depending on the purpose of the critique) consider whether the rights of human subjects were violated during the investigation. If there are any potential ethical concerns, you need to consider the impact of those problems on the scientific merit of the study on the one hand and on participants' wellbeing on the other. 

    There are two main types of ethical transgressions in research studies. The first consists of inadvertent actions or activities that researchers did not interpret as creating an ethical dilemma. For example, in one study that examined married couples' experiences with sexually transmitted diseases, the researcher asked husbands and wives to complete, privately, self-administered questionnaires. The researcher offered to mail back copies of the questionnaires to couples who wanted an opportunity to discuss their responses.

    Although this offer was viewed as an opportunity to enhance couple communication, it is problematic. Some subjects may have felt compelled to say, under spousal pressure, that they wanted to share their responses, when, in fact, they wanted to keep their answers private. The use of the mail to return these sensitive completed questionnaires was also questionable. In this case, the ethical problem was inadvertent and could easily be resolved (e.g., the researcher could give out blank copies of the questionnaire for couples to go over together).

    In other cases, researchers might be aware of having committed some violation of ethical principles, but made a conscious decision that the violation was modest in relation to the knowledge that could be gained by doing the study in a certain way. For example, a researcher may decide not to obtain informed consent from the parents of minor children attending a family planning clinic because to require such consent would probably dramatically reduce the number of minors willing to participate in the research and would lead to a biased sample of clinic users; it could also violate the minors' right to confidential treatment at the clinic.

Interpretive Dimensions

    Research reports usually conclude with a Discussion, Conclusions, or Implications section. It is in the final section that researchers attempt to make sense of the analyses, to consider whether the findings support or fail to support hypotheses or theory, and to discuss what the findings imply for nursing. Inevitably, discussion sections are more subjective than other sections of a report, but they must also be based on a careful consideration of the evidence. In an appraisal of a report, unwarranted interpretations are fair game for criticism.     

    As a reviewer, you should be somewhat wary if a discussion section fails to point out any study limitations. Researchers are themselves in the best position to detect and assess the impact of sampling deficiencies, data quality problems, and so on, and it is a professional responsibility to alert readers to such issues. Moreover, when researchers note methodological shortcomings, readers have some assurance that these limitations were considered in interpreting the results. Of course, researchers are unlikely to note all relevant shortcomings of their own work.

    The inclusion of comments about study limitations in the discussion section, although important, does not relieve you of the responsibility of appraising methodologic decisions. Your task as a reviewer is generally to contrast your own interpretation with that of the researcher and to challenge conclusions that do not appear to be supported by the results. If your objective reading of the research methods and study findings leads to an interpretation that is notably different from that of the researcher, then the study's interpretive dimension may be faulty. 

    It is often difficult to determine the validity of researchers' interpretations in qualitative studies. To help readers understand the lens through which they interpreted their data, qualitative researchers should make note of whether they (1) kept field notes or a journal of their actions and emotions during the investigation, (2) discussed their own behavior and experiences in relation to the participants' experiences, and (3) acknowledged any effects of their presence on the nature of the data collected. Reviewers should look for such information in critiquing qualitative reports.

    In addition to contrasting your interpretation with that of the researcher (when this is possible), your critique should also draw conclusions about the stated implications of the study. Some researchers make rather grandiose claims or offer unfounded recommendations on the basis of modest results. When vital pieces of information are missing, researchers leave readers little choice but to assume the worst, because this leads to the most cautious interpretation of the results.

Presentational and Stylistic Dimensions

    The writing in a research report, as in any published document, should be clear, grammatical, concise, and well organized. Unnecessary jargon should be minimized, and colloquialisms usually should be avoided as well. Inadequate organization is another presentation flaw in some research reports. Continuity and logical thematic development are critical to good communication of scientific information, but these qualities are often difficult to attain. Styles of writing do differ for qualitative and quantitative reports, and it is unreasonable to apply the standards considered appropriate for one paradigm to the other.

    Quantitative research reports are typically written in a more formal, impersonal fashion, using either the third person or passive voice to connote objectivity. Qualitative studies are likely to be written in a more literary style, using the first or second person and active voice to connote proximity and intimacy with the data and the phenomenon under study. 

    Regardless of style, however, you should, as a reviewer, be alert to indications of overt biases, unwarranted exaggerations, emotionally laden comments, or melodramatic language. In summary, the research report is meant to be an account of how and why a problem was studied and what results were obtained. The report should be clearly written, cogent, and concise, and written in a manner that piques readers' interest and curiosity.

Conclusion

    In concluding this topic, several points about research critiques should be made. It should be apparent from the many appraisal questions raised in this discussion that it will not always be possible to answer all of them satisfactorily. This is especially true for reports published as journal articles, in which the need for economy often translates into compressed methodologic descriptions. Furthermore, many of the questions may have little or no relevance for particular studies.

    The inclusion of a question does not necessarily imply that all reports should have all components mentioned. The questions are meant to suggest aspects of a study that often are deserving of consideration; they are not meant to lay traps for identifying omitted and perhaps unnecessary details. It must be admitted that the answers to many questions will call on your judgment as much as, or even more than, your knowledge.

    An evaluation of whether the most appropriate data collection procedure was used for the research problem necessarily involves a degree of subjectivity. Issues concerning the appropriateness of various strategies and techniques are topics about which even experts disagree. You should strive to be as objective as possible and to indicate your reasoning for the judgments made. One final note is that there is increasing interest in using quantitative methods to rate or score the methodological quality of individual studies for the purposes of conducting integrative reviews and meta-analyses; a hedged sketch of one such scoring scheme follows.
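    As a purely illustrative example of such scoring, the following Python sketch assigns weighted points for a handful of methodologic criteria. The criteria, weights, and example study are hypothetical and do not represent any published, validated instrument.

```python
# Hypothetical quality-scoring rubric for studies in an integrative review.
# The criteria and weights are invented for illustration and do not
# correspond to any published, validated instrument.
CRITERIA = {
    "random_assignment": 3,           # design strength
    "adequate_sample": 2,             # e.g., justified by a power analysis
    "validated_measures": 2,          # reliability/validity evidence reported
    "low_attrition": 1,
    "blinded_outcome_assessment": 2,
}

def quality_score(study):
    """Sum the weights of the criteria a study satisfies."""
    return sum(weight for criterion, weight in CRITERIA.items()
               if study.get(criterion, False))

example_study = {  # a hypothetical study's appraisal results
    "random_assignment": True,
    "adequate_sample": True,
    "validated_measures": True,
    "low_attrition": False,
    "blinded_outcome_assessment": False,
}
print(f"Quality score: {quality_score(example_study)} / {sum(CRITERIA.values())}")
# Prints: Quality score: 7 / 10
```

    Whatever scheme is used, the scoring rules should be stated explicitly so that other reviewers can apply, and challenge, the same judgments.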
