Qualitative Research Analysis General Considerations & Tips

Afza.Malik GDA


Qualitative Research Analysis

    Qualitative Research Analysis: General Considerations & Tips covers analysis styles, qualitative data management and organization, and developing a categorization scheme.

    Qualitative analysis is a labor-intensive activity that requires creativity, conceptual sensitivity, and sheer hard work. It is also more complex and arduous than quantitative analysis, in part because it is less formulaic. In this section, we discuss some general considerations relating to qualitative analysis.

Qualitative Analysis: General Considerations & Tips

    The purpose of both qualitative and quantitative data analysis is to organize, provide structure to, and elicit meaning from research data. In qualitative studies, however, data collection and data analysis usually occur simultaneously, rather than after all data are collected. The search for important themes and concepts begins from the moment data collection begins. Qualitative data analysis is a particularly challenging enterprise, for three major reasons. 

    First, there are no universal rules for analyzing and presenting qualitative data. The absence of standard analytical procedures makes it difficult to explain how to do such analyses, how to present findings in such a way that their validity is apparent, and how to replicate studies. The second challenge of qualitative analysis is the enormous amount of work required. Qualitative analysts must organize and make sense of pages and pages of narrative materials. 

    In a recent multimethod study by one of us, the qualitative data consisted of transcribed, unstructured interviews with over 100 low-income women discussing life stressors and health problems. The transcriptions ranged from 30 to 50 pages in length, resulting in more than 3000 pages that had to be read, reread, and then organized, integrated, and interpreted. 

    The final challenge comes in reducing data for reporting purposes. Quantitative results can often be summarized in two or three tables. Qualitative researchers, by contrast, must balance the need to be concise with the need to maintain the richness and evidentiary value of their data.

Analysis Styles

    Crabtree and Miller (1999) observed that there are nearly as many qualitative analysis strategies as there are qualitative researchers, but they identified three major analysis styles that fall along a continuum. At one end is a style that is more systematic and standardized, and at the other is a style that is more intuitive, subjective, and interpretive. The three prototypical styles are as follows:

• Template analysis style. In this style, researchers develop a template or analysis guide to which the narrative data are applied. The units for the template are typically behaviors, events, and linguistic expressions (eg, words or phrases). Although researchers begin with a rudimentary template before collecting data, the template undergoes constant revision as more data are gathered. 

    The analysis of the resulting data, once sorted according to the template, is interpretive and not statistical. This style is most likely to be adopted by researchers working in the traditions of ethnography, ethology, discourse analysis, and ethnoscience.

• Editing analysis style. Researchers using an editing style act as interpreters who read through the data in search of meaningful segments and units. Once segments are identified and reviewed, they develop a categorization scheme and corresponding codes that can be used to sort and organize the data. 

    The researchers then search for the patterns and structure that connect the thematic categories. Researchers whose research traditions are grounded theory, phenomenology, hermeneutics, and ethnomethodology use processes that fall within the editing analysis style.

• Immersion/crystallization style. This style involves the analyst's total immersion in and reflection on the text materials, resulting in an intuitive crystallization of the data. This highly interpretive and subjective style is exemplified in personal case reports of a semianecdotal nature, and is encountered less frequently in the nursing research literature than the other two styles. 

    Researchers seldom use terms like template analysis style or editing style in research reports; these terms are primarily post hoc characterizations of the styles qualitative researchers adopt. However, King (1998) has described the process of undertaking a template analysis, and his approach has been used in qualitative studies.

The Qualitative Analysis Process

    The analysis of qualitative data is an active and interactive process, especially at the interpretive end of the analysis style continuum. Qualitative researchers typically scrutinize their data carefully and deliberatively, often reading the data over and over again in a search for meaning and deeper understanding. Insights and theories cannot emerge until researchers become completely familiar with their data. 

    Morse and Field (1995) note that qualitative analysis is a “process of fitting data together, of making the invisible obvious, of linking and attributing consequences to antecedents. It is a process of conjecture and verification, of correction and modification, of suggestion and defense.” Several intellectual processes play a role in qualitative analysis. Morse and Field (1995) have identified four such processes: 

1. Comprehending. Early in the analytical process, qualitative researchers strive to make sense of the data and to learn “what is going on.” When comprehension is achieved, they are able to prepare a thorough, rich description of the phenomenon under study, and new data do not add much to that description. Thus, comprehension is completed when saturation has been attained. 

2. Synthesizing. Synthesizing involves a “sifting” of the data and putting pieces together. At this stage, researchers get a sense of what is typical with regard to the phenomenon, and what variation is like. At the end of the synthesis, researchers can make some generalized statements about the phenomenon and about study participants. 

3. Theorizing. Theorizing involves a systematic sorting of the data. During this process, researchers develop alternative explanations of the phenomenon and then hold these explanations up to the data to determine their fit. Theorizing continues to evolve until the best and most parsimonious explanation is obtained. 

4. Recontextualizing. The process of recontextualization involves further development of the theory to explore its applicability to other settings or groups. In qualitative inquiries whose ultimate goal is theory development, it is the theory that must be recontextualized and generalized. 

    Although the intellectual processes in qualitative analysis are not linear in the same sense that quantitative analysis is, these four processes follow a rough progression over the course of the study. Comprehension occurs primarily while in the field. Synthesis begins in the field but may continue well after the fieldwork is done. Theorizing and recontextualizing are difficult to undertake before synthesis has been completed.

Qualitative Data Management and Organization

    Qualitative analysis is supported and facilitated by several tasks that help to organize and manage the mass of narrative data, as described next.

Transcribing Qualitative Data

    In qualitative studies, audiotaped interviews and field notes are major data sources, and most researchers have their tapes transcribed for analysis. Verbatim transcription is a critical step in preparing for data analysis, and researchers need to ensure that transcriptions are accurate, that they validly reflect the totality of the interview experience, and that they facilitate analysis. With regard to the last two points, it is useful to develop transcription conventions or use existing ones. 

    For example, transcribers have to indicate through symbols in the written text who is speaking (eg, “I” for interviewer, “P” for participant), overlaps in speaking turns, time elapsed between utterances when there are gaps, nonlinguistic utterances (eg, sighs, sobs, laughter), emphasis of words, and so on. Silverman (1993) offers some guidance with regard to transcription conventions. Transcription errors are almost inevitable, which means that researchers need to check the accuracy of transcribed data. 
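
    To make this concrete, the short Python sketch below assumes a purely hypothetical convention set, not one drawn from Silverman (1993): “I:” and “P:” open speaker turns, bracketed notes such as [sighs] record nonlinguistic utterances, and parenthesized numbers such as (2.0) record gaps in seconds.

```python
import re

# Hypothetical transcription conventions (invented for illustration):
# "I:"/"P:" open a speaker turn, [sighs] marks a nonlinguistic utterance,
# and (2.0) marks a 2-second gap between utterances.
TURN = re.compile(r"^(?P<speaker>[IP]):\s*(?P<utterance>.*)$")

def parse_transcript(lines):
    """Split a conventionally marked transcript into (speaker, utterance) turns."""
    turns = []
    for line in lines:
        match = TURN.match(line.strip())
        if match:
            turns.append([match.group("speaker"), match.group("utterance")])
        elif turns:
            # A line without a speaker tag continues the previous turn.
            turns[-1][1] += " " + line.strip()
    return turns

sample = [
    "I: How have you been managing since the diagnosis?",
    "P: Honestly [sighs] it has been hard. (2.0) I try not to",
    "think about it too much.",
]
for speaker, utterance in parse_transcript(sample):
    print(f"{speaker}: {utterance}")
```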

Poland (1995) notes that there are three categories of error:

    1. Deliberate alterations of the data. Transcribers may intentionally try to “fix” data to make the transcriptions look more like what they “should” look like. Such alterations are not done out of malice, but rather reflect a desire to be helpful. For example, transcribers may alter profanities, omit extraneous sounds such as phones ringing, or “tidy up” the text by deleting “ums” and “uhs.” It is crucial to impress on transcribers the importance of verbatim accounts.

    2. Accidental alterations of the data. Inadvertent transcription errors are far more common. One pervasive problem concerns punctuation: the insertion or omission of commas, periods, or question marks can alter the interpretation of the text. The most common error in this category is mishearing actual words and substituting words that change the meaning of the dialogue. 

    For example, the actual words might be, “this was totally moot,” whereas the transcription might read, “this was totally mute.” Researchers should thus never assume that transcriptions are accurate, and should take steps to verify accuracy before analysis gets underway.

    3. Unavoidable alterations. Data are unavoidably altered by the fact that transcriptions capture only a portion of an interview experience. For example, transcriptions will inevitably miss many nonverbal cues, such as body language and intonation. Researchers should begin analysis with the best-quality data possible, and this requires careful training of transcribers, ongoing feedback, and continuous efforts to verify accuracy.

Developing a Categorization Scheme

    Another early step in analyzing qualitative data is to organize them by classifying and indexing them. Researchers must design a mechanism for gaining access to parts of the data without having to repeatedly reread the data set in its entirety. This phase of data analysis is essentially a reductionist activity: data must be converted to smaller, more manageable units that can be retrieved and reviewed. 

    The most widely used procedure is to develop a categorization scheme and then to code data according to the categories. A preliminary categorization system is sometimes prepared before data collection, but in most cases qualitative analysts develop categories based on a scrutiny of actual data. There are, unfortunately, no straightforward or easy guidelines for this task. 

    The development of a high-quality categorization scheme involves a careful reading of the data, with an eye to identifying underlying concepts and clusters of concepts. The nature of the categories may vary in level of detail or specificity, as well as in level of abstraction. Researchers whose aims are primarily descriptive tend to use categories that are fairly concrete. 

    For example, the category scheme may focus on differentiating various types of actions or events, or different phases in a chronologic unfolding of an experience. In developing a category scheme, related concepts are often grouped together to facilitate the coding process. Studies designed to develop a theory are more likely to involve abstract, conceptual categories. 

    In designing conceptual categories, researchers must break the data into segments, closely examine them, and compare them to other segments for similarities and dissimilarities to determine what types of phenomena are reflected in them and what those phenomena mean. (This is part of the process referred to as constant comparison by grounded theory researchers.) The researcher asks questions such as the following about discrete events, incidents, or statements:

What is this?

What's going on?

What does it stand for?

What else is like this?

What is this distinct from?

    Important concepts that emerge from close examination of the data are then given labels that form the basis for a categorization scheme. These category names are necessarily abstractions, but the labels are usually sufficiently graphic that the nature of the material to which they refer is clear and often provocative. Strauss and Corbin (1998) advise qualitative researchers as follows: “It is very important that the conceptual name or label be suggested by the context in which an event is located.”
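
    As an illustration of what a simple scheme might look like once concepts have been labeled and grouped, the sketch below represents a hypothetical categorization scheme in code; every category name, code, and label is invented.

```python
# A hypothetical categorization scheme: concrete codes grouped under more
# abstract conceptual categories. All names are invented for illustration.
category_scheme = {
    "Stressors": {
        "STRESS-FIN": "financial strain",
        "STRESS-FAM": "family conflict",
    },
    "Coping": {
        "COPE-SOC": "seeking social support",
        "COPE-AVOID": "avoidance or distraction",
    },
}

def master_code_list(scheme):
    """Flatten the scheme into (category, code, label) rows."""
    return [(category, code, label)
            for category, codes in scheme.items()
            for code, label in codes.items()]

for category, code, label in master_code_list(category_scheme):
    print(f"{code:12} {label:26} ({category})")
```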

Coding Qualitative Data

    Once a categorization scheme has been developed, the data are reviewed and coded for correspondence to or exemplification of the identified categories. Coding qualitative material is rarely easy, for several reasons. First, researchers may have difficulty deciding the most appropriate code, or may not fully comprehend the underlying meaning of some aspect of the data. 

    It may take a second or third reading of the material to grasp its nuances. Second, researchers often discover in going through the data that the initial category system was incomplete or inadequate. It is common for themes to emerge that were not initially identified. When this happens, it is risky to assume that the theme failed to appear in materials that have already been coded. A concept might not be identified as salient until it has emerged three or four times. 

    In such a case, it would be necessary to reread all previously coded material to have a truly complete grasp of that category. Another issue is that narrative materials are usually not linear. For example, paragraphs from transcribed interviews may contain elements relating to three or four different categories, embedded in a complex fashion. 

    It is sometimes recommended that a single member of the research team code the entire data set, to ensure the highest possible coding consistency across interviews or observations. Nevertheless, at least a portion of the interviews should be coded by two or more people early in the coding process, to evaluate and ensure intercoder reliability.
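
    The passage above does not prescribe a particular reliability statistic, but percent agreement and Cohen's kappa are common choices. The sketch below computes both for two hypothetical coders who each assigned one code per data segment; the codes themselves are invented.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders who assigned one code per segment."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement, from each coder's marginal code frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented codes for five segments coded independently by two coders.
a = ["STRESS-FIN", "COPE-SOC", "STRESS-FIN", "COPE-AVOID", "STRESS-FIN"]
b = ["STRESS-FIN", "COPE-SOC", "STRESS-FAM", "COPE-AVOID", "STRESS-FIN"]
print(f"percent agreement = {sum(x == y for x, y in zip(a, b)) / len(a):.0%}")
print(f"Cohen's kappa     = {cohens_kappa(a, b):.2f}")
```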

Manual Methods of Organizing Qualitative Data

    Qualitative data traditionally have been organized manually through a variety of techniques. Although manual methods have a long and respected history, they are becoming increasingly outmoded as a result of the widespread availability of personal computers that can be used to perform the filing and indexing of qualitative material. 

    Here, we briefly describe some manual methods of data organization and management, and the next section describes computer methods. When the amount of data is small, or when a category system is simple, researchers sometimes use colored paper clips or colored Post-It Notes to code the content of the narrative materials. 

    For example, if we were analyzing responses to an unstructured question about women's attitudes toward menopause, we might use blue paper clips for text relating to loss of fertility, red clips for text on menopausal side effects, yellow clips for text relating to aging, and so on. We could then pull out all responses with a certain color clip to examine one aspect of menopausal attitudes at a time. Another manual approach is to develop conceptual files. 

    In this approach, researchers create a physical file for each category in their coding scheme and insert all material relating to that category into the file. To create conceptual files, researchers must first go through all the data, writing relevant codes in the margins. Then they cut up a copy of the material by category area and place each cut-out excerpt into the file for that category. 

    All of the content on a particular topic can then be retrieved by going to the applicable file folder. Creating such conceptual files is a cumbersome and labor-intensive task. This is particularly true when segments of the narrative materials have multiple codes: a paragraph with six codes, for example, would require six copies, one for each file corresponding to a code. 

    Researchers must also be sensitive to the need to provide enough context that the cut-up material can be understood (eg, including material preceding or following the directly relevant materials). Finally, researchers must usually include pertinent administrative information. 

    For example, if the data were from transcribed interviews, informants would be assigned an ID number. Each excerpt filed in the conceptual file would need to include the appropriate ID number so that researchers could, if necessary, obtain additional information from the master copy.
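
    An electronic analogue of these conceptual files is easy to sketch: each excerpt keeps its informant's ID (so the analyst can return to the master transcript for context) and is “filed” under every code attached to it, so an entire folder can be pulled at once. The codes and excerpts below are invented for illustration.

```python
from collections import defaultdict

# An electronic analogue of conceptual files: each excerpt keeps its
# informant's ID and is "filed" under every code attached to it.
# All codes and excerpts are invented for illustration.
conceptual_files = defaultdict(list)

def file_excerpt(informant_id, text, codes):
    for code in codes:
        conceptual_files[code].append((informant_id, text))

file_excerpt("P017", "I kept working because we needed the money, even "
                     "when the pain got bad.", ["STRESS-FIN", "HEALTH"])
file_excerpt("P023", "My sister watched the kids so I could rest.",
             ["COPE-SOC"])

# Pulling one "file folder": every excerpt coded STRESS-FIN, with its ID.
for informant_id, text in conceptual_files["STRESS-FIN"]:
    print(informant_id, "|", text)
```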

Computer Programs for Managing Qualitative Data

    Computer programs remove the drudgery of cutting and pasting pages and pages of narrative material and are becoming almost indispensable research tools. These programs permit the entire data file to be entered onto the computer, each portion of an interview or observational record to be coded, and portions of the text corresponding to specified codes to be retrieved and printed (or shown on a screen) for analysis. 

    The current generation of programs has features that go beyond simple indexing and retrieval; they offer possibilities for actual analysis and integration of data. The most widely used computer programs for qualitative data have been designed for personal computers, and most are for use with IBM-compatible computers rather than Macintoshes. 

    Some examples of major software include The Ethnograph, MARTIN, and QUALPRO (all for use with IBM-type PCs), and HyperQual2 (for use with Macs). A newer generation of programs, which Weitzman and Miles (1995) categorize as “conceptual network builders,” has been developed to help users formulate and represent conceptual schemes through a graphic network of links. 

    ATLAS/TI and NUD*IST (Non-numerical Unstructured Data Indexing, Searching, and Theorizing) are two of the most serious contenders in the category of coding and theory-building software. Barry (1998) compared these two programs on two dimensions: the structure of the software and the complexity of the research project. Barry views ATLAS/TI's strengths as its visual and spatial qualities, its interlinkages, and its creativity. 

    The ability to create hyperlinks, which is offered by ATLAS/TI, allows for building nonhierarchical networks. On the other hand, NUD*IST's strengths include its project management functions, its structured organization, and its sophisticated level of searching. With NUD*IST, hierarchies of coding categories can be built and developed.

    Researchers typically begin by entering the qualitative data into a word processing program (eg, Word or WordPerfect). The data are then imported into the analysis program. A few qualitative data management programs (eg, QUALPRO, HyperQual2) allow text to be entered directly rather than requiring an import file from a word processor. Next, the researcher marks the boundaries (ie, the beginning and end) of a segment of data and then codes the segment according to the developed category system. 

    In some programs, this step can be done directly on the computer screen in a one-step process, but others require two steps. The first step involves the numbering of lines of text and the subsequent printing out of the text with the line numbers appearing in the margins. Then, after coding the paper copy, the researcher tells the computer which codes go with which lines of text. Most programs permit overlapping coding and the nesting of segments with different codes within one another. 
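
    One way to picture this line-number approach, including the overlapping and nested coding just described, is the hypothetical sketch below, in which each coded segment is simply a (first line, last line, code) triple over a line-numbered transcript; the codes and line ranges are invented.

```python
# Hypothetical line-number coding: each coded segment is a
# (first_line, last_line, code) triple, and segments may overlap or nest.
coded_segments = [
    (1, 12, "STRESS-FIN"),
    (5, 8, "HEALTH"),      # nested inside the STRESS-FIN segment
    (10, 15, "COPE-SOC"),  # overlaps the end of the STRESS-FIN segment
]

def codes_at(line, segments):
    """All codes attached to a given line of the transcript."""
    return [code for first, last, code in segments if first <= line <= last]

print(codes_at(7, coded_segments))   # ['STRESS-FIN', 'HEALTH']
print(codes_at(11, coded_segments))  # ['STRESS-FIN', 'COPE-SOC']
```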

    All major programs permit editing; that is, codes can be altered, expanded, or deleted, and the boundaries of segments of text can be changed. These programs also provide screen displays or printouts of collated segments; however, some programs do this only on a file-by-file basis (eg, one interview at a time) rather than allowing researchers to retrieve all segments with a given code across files. Beyond these basic features, available programs vary in the enhancements they offer. The following is a partial list of features available in some programs:

• Automatic coding according to words or phrases found in the data

• Compilation of a master list of all codes used

• Selective searches (ie, restricted to cases with certain characteristics, such as searching for a code only in interviews with women)

• Searches for co-occurring codes (ie, retrieval of data segments to which two or more specific codes are attached)

• Retrieval of information on the sequence of coded segments (ie, on the order of appearance of certain codes)

• Frequency count of the occurrence of codes

• Calculation of code percentages, in relation to other codes

• Calculation of the average size of data segments

• Listing and frequency count of specific words in the data files

• Searches for relationships among coded categories

    Several of these enhancements have led to a blurring in the distinction between qualitative data management and data analysis, as the sketch below suggests.
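
    As a hypothetical illustration of three of the listed features, a frequency count of codes, retrieval of segments with co-occurring codes, and a selective search restricted to interviews with women, consider the following sketch; the segment records, codes, and informant characteristics are all invented.

```python
from collections import Counter

# Invented coded segments; "sex" stands in for any case characteristic
# that a selective search might use.
segments = [
    {"informant": "P017", "sex": "F", "codes": {"STRESS-FIN", "HEALTH"}},
    {"informant": "P023", "sex": "F", "codes": {"COPE-SOC"}},
    {"informant": "P031", "sex": "M", "codes": {"STRESS-FIN", "COPE-SOC"}},
]

# Frequency count of the occurrence of codes.
print(Counter(code for seg in segments for code in seg["codes"]))

# Co-occurring codes: segments to which two specified codes are both attached.
both = [seg["informant"] for seg in segments
        if {"STRESS-FIN", "COPE-SOC"} <= seg["codes"]]
print(both)  # ['P031']

# Selective search: a code sought only in interviews with women.
women = [seg["informant"] for seg in segments
         if seg["sex"] == "F" and "STRESS-FIN" in seg["codes"]]
print(women)  # ['P017']
```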

    Computer programs offer many advantages for organizing qualitative data, but some people prefer manual indexing because it allows them to get closer to the data. Others have raised concerns about using programs for the analysis of qualitative data, objecting to having a process that is basically cognitive turned into an activity that is mechanical and technical. Seidel (1993), for example, noted that there is a dark side of computer technology in qualitative data analysis. 

    He proposed that these technological advances can lead to research behavior that he calls “analytic madness,” which he claimed can disturb qualitative sensibilities. Seidel described three forms of such behavior. First, researchers can become infatuated with the amount of data the computer can deal with, which can lead to sacrificing resolution or insight for scope. 

    Second, in assigning code words to identify portions of text, researchers can mistakenly consider these codes significant just because they appear in certain quantities, without analyzing and critically evaluating the code words they have labeled and counted. Finally, the use of computers can distance or separate researchers from their data. Agar (1993) also urged qualitative researchers to remember that computer programs represent only a portion of the research process. 

    When that portion is taken for the whole, researchers can get the right answer to the wrong question. Qualitative analysis at times emphasizes the interrelated detail in a small, limited number of cases instead of common properties among a large number of cases. For that, Agar stressed that one needs a small amount of data and a lot of right brain. Despite these concerns, many qualitative researchers have switched to computerized data management. Proponents insist that it frees up their time and permits them to pay greater attention to more important conceptual issues.
