Assessing Validity of Data IV: Confirmability and Transferability in Lincoln and Guba's (1985) Framework
Confirmability
Confirmability refers to the objectivity or neutrality of the data, that is, the potential for congruence between two or more independent people about the data's accuracy, relevance, or meaning. Bracketing (in phenomenological studies) and maintaining a reflexive journal are methods that can enhance confirmability, although these strategies do not actually document that it has been achieved. Inquiry audits can be used to establish both the dependability and confirmability of the data. For an inquiry audit, researchers develop an audit trail, that is, a systematic collection of materials and documentation that allows an independent auditor to come to conclusions about the data.
There are six classes of records that are of special interest in creating an adequate audit trail:
(1) the raw data (eg,
field notes, interview transcripts); (2) data reduction and analysis products
(eg, theoretical notes, documentation on working hypotheses); (3) process notes
(eg, methodological notes, notes from member check sessions); (4) materials
relating to researchers' intentions and dispositions (eg, reflective notes);
(5) instrument development information (eg, pilot forms); and (6) data
reconstruction products (eg, drafts of the final report). Once the audit trail
materials are assembled, the inquiry auditor proceeds to audit, in a fashion
analogous to a financial audit, the trustworthiness of the data and the
meanings attached to them. Although the auditing task is complex, it can serve
as an invaluable tool for persuading others that qualitative data are worthy of
confidence. Relatively few comprehensive inquiry audits have been reported in
the literature, but some studies report partial audits or the assembling of
auditable materials. Rodgers and Cowles (1993) present useful information about
inquiry audits.
Transferability
In Lincoln and Guba's (1985) framework, transferability refers essentially to
the generalizability of the data, that is, the extent to which the findings can
be transferred to other settings or groups. This is, to some extent, a sampling
and design issue rather than an issue relating to the soundness of the data per
se. However, as Lincoln and Guba note, the responsibility of the investigator
is to provide sufficient descriptive data in the research report so that
consumers can evaluate the applicability of the data to other contexts: “Thus
the naturalist cannot specify the external validity of an inquiry; he or she
can provide only the thick description necessary to enable someone interested
in making a transfer to reach a conclusion about whether transfer can be
contemplated as a possibility.”
Other Criteria for Assessing Quality in Qualitative Research
Qualitative researchers
who take steps to enhance, assess, and document quality are most likely to use
Lincoln and Guba's criteria. However, as noted previously, other criteria have
been proposed, and new ways of thinking about quality assessments for
qualitative studies are emerging. Whittemore, Chase, and Mandle (2001), in
their synthesis of qualitative criteria, use the term validity as the
overarching goal. Although this term has been eschewed by many qualitative
researchers as a "translation" from quantitative perspectives,
Whittemore and her colleagues argue that validity is the most appropriate term.
According to their view, the dictionary definition of validity as “the state or
quality of being sound, just, and well-founded” lends itself equally to
qualitative and quantitative research.
In their synthesis of criteria that can be used to develop evidence of validity in qualitative studies, Whittemore and associates proposed four primary criteria and six secondary criteria. In their view, the primary criteria are essential to all qualitative inquiry, whereas the secondary criteria provide supplementary benchmarks of validity and are not relevant to every study. They argue that judgment is needed to determine the optimal weight given to each of the 10 criteria in specific studies. The four primary criteria are credibility (as discussed earlier), authenticity, criticality, and integrity. The six secondary criteria are explicitness, vividness, creativity, thoroughness, congruence, and sensitivity.

Table 18-5 lists these 10 criteria and the assessment questions relevant to achieving each. The questions can be used by qualitative researchers in their efforts to enhance the rigor of their studies and by consumers to evaluate the quality of the evidence studies yield. A scrutiny of Table 18-5 reveals that the list contains many of the same concerns as those encompassed in Lincoln and Guba's four criteria. This overlap is further illustrated by considering techniques that can be used to contribute evidence of study validity according to these 10 criteria. As shown in Table 18-6, many of the techniques previously described in this chapter, as well as some methods discussed in earlier chapters, are important strategies for developing evidence of validity. These techniques can be used throughout the data collection and analysis process and in preparing research reports.
Meadows and Morse (2001) discuss the components of rigor in qualitative studies
and, similar to Whittemore and colleagues, conclude that the traditional terms
of validity and reliability are appropriate in qualitative studies. Meadows and
Morse argue that by not using traditional quantitative terminology,
qualitative research has not yet taken its rightful place in the world of
evidence and science. They call for the use of three components of rigor:
verification, validation, and validity. Verification refers to strategies
researchers use to enhance validity in the process of conducting a high-quality
study. Verification strategies include the conduct of a thorough literature
review, bracketing, theoretical sampling, and data saturation. Validation deals
with the researcher's efforts to assess validity, apart from efforts to enhance
it. Validation techniques include those discussed earlier, such as member
checking, inquiry audits, triangulation, and so on. The final step in achieving
validity involves the use of external judges to assess whether the project as a
whole is trustworthy and valid.
Learning Outcomes
• Measurement involves the assignment of numbers to objects to represent the amount of an attribute, using a specified set of rules. Researchers strive to develop or use measurements whose rules are isomorphic with reality.
• Few quantitative measuring instruments are infallible. Sources of measurement error include situational contaminants, response-set biases, and transitory personal factors, such as fatigue.
• Obtained scores from an instrument consist of a true score component (the value that would be obtained for a hypothetical perfect measure of the attribute) and an error component, or error of measurement, that represents measurement inaccuracies.
• Reliability, one of two primary criteria for assessing a quantitative instrument, is the degree of consistency or accuracy with which an instrument measures an attribute. The higher the reliability of an instrument, the lower the amount of error in obtained scores.
• There are different methods for assessing an instrument's reliability and for computing a reliability coefficient. A reliability coefficient typically is based on the computation of a correlation coefficient that indicates the magnitude and direction of a relationship between two variables.
• Correlation coefficients can range from −1.00 (a perfect negative relationship) through zero to +1.00 (a perfect positive relationship). Reliability coefficients usually range from .00 to 1.00, with higher values reflecting greater reliability.
• The stability aspect of reliability, which concerns the extent to which an instrument yields the same results on repeated administrations, is evaluated by test-retest procedures.
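The test-retest procedure above amounts to correlating two administrations of the same instrument. A minimal sketch follows; the two sets of scores are invented for illustration, and the Pearson formula is written out rather than taken from a statistics library.

```python
# Hypothetical illustration: test-retest reliability as the Pearson
# correlation between two administrations of the same scale.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17, 11]   # first administration (invented scores)
time2 = [13, 14, 10, 19, 18, 12]  # second administration, two weeks later
print(round(pearson_r(time1, time2), 2))
```

A coefficient this close to 1.00 would indicate that respondents kept nearly the same rank order across administrations, i.e., a stable instrument.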
• The internal consistency aspect of reliability, which refers to the extent to which all the instrument's items are measuring the same attribute, is assessed using either the split-half reliability technique or, more likely, Cronbach's alpha method.
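Cronbach's alpha can be computed directly from the standard formula, alpha = k/(k−1) × (1 − Σvar(items)/var(totals)). The sketch below uses an invented 4-item, 5-respondent score matrix purely for demonstration.

```python
# A minimal sketch of Cronbach's alpha from raw item scores.
from statistics import variance  # sample variance

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per item."""
    k = len(items)
    item_vars = sum(variance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

items = [
    [3, 4, 2, 5, 4],  # item 1 scores for five respondents (invented)
    [2, 4, 3, 5, 3],  # item 2
    [3, 5, 2, 4, 4],  # item 3
    [2, 4, 2, 5, 3],  # item 4
]
print(round(cronbach_alpha(items), 2))
```

Alpha rises when the items covary strongly, which is exactly the "items measuring the same attribute" idea in the summary point.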
• When the reliability assessment focuses on equivalence between observers in rating or coding behaviors, estimates of interrater (or interobserver) reliability are obtained.
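Two common interrater indices are simple percent agreement and Cohen's kappa, which corrects agreement for chance. The codes below are invented for two hypothetical observers rating the same ten behaviors.

```python
# Illustrative sketch: two interrater reliability indices for a pair
# of observers coding the same behaviors (invented data).
from collections import Counter

def percent_agreement(a, b):
    """Proportion of occasions on which the two raters gave the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters."""
    n = len(a)
    po = percent_agreement(a, b)            # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[c] * cb[c] for c in set(a) | set(b)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no",  "yes", "yes"]
rater2 = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]
print(round(percent_agreement(rater1, rater2), 2))
print(round(cohens_kappa(rater1, rater2), 2))
```

Kappa is always lower than raw agreement because some matching codes would occur by chance alone.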
• Reliability coefficients reflect the proportion of true variability in a set of scores to the total variability obtained.
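This summary point can be made concrete with a small numeric illustration: if each observed score is a true score plus an error, the reliability coefficient is true-score variance divided by total observed variance. The true scores and errors below are invented (in practice true scores are, of course, unobservable).

```python
# Hedged numeric illustration: reliability = true variance / total variance.
from statistics import pvariance  # population variance

true_scores = [50, 55, 60, 45, 65, 52, 58]  # hypothetical true values
errors      = [ 2, -1,  3, -2,  1, -3,  0]  # hypothetical measurement errors
observed    = [t + e for t, e in zip(true_scores, errors)]

reliability = pvariance(true_scores) / pvariance(observed)
print(round(reliability, 2))
```

A value of .70 here would mean that 70% of the variability in obtained scores reflects true differences among people, and 30% reflects error.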
• Validity is the degree to which an instrument measures what it is supposed to be measuring.
• Face validity refers to whether the instrument appears, on the face of it, to be measuring the appropriate construct.
• Content validity is concerned with the sampling adequacy of the content being measured. Expert judgments can be used to compute a content validity index (CVI), which aggregates experts' ratings of the relevance of the items on a scale.
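One common way to compute a CVI (a sketch under the usual convention, with invented ratings) is to have each expert rate each item's relevance on a 4-point scale, count ratings of 3 or 4 as "relevant," and average across experts and items.

```python
# A sketch of content validity index (CVI) computation: item-level
# CVIs and a scale-level CVI averaged across items. Ratings invented.

def item_cvi(ratings):
    """Proportion of experts rating the item 3 or 4 (relevant)."""
    return sum(r >= 3 for r in ratings) / len(ratings)

# rows = items, columns = five experts' relevance ratings on a 1-4 scale
ratings_by_item = [
    [4, 4, 3, 4, 3],
    [3, 4, 4, 3, 4],
    [2, 3, 4, 3, 3],
    [4, 3, 3, 2, 4],
]
i_cvis = [item_cvi(r) for r in ratings_by_item]
scale_cvi = sum(i_cvis) / len(i_cvis)  # average item-level CVI
print([round(v, 2) for v in i_cvis], round(scale_cvi, 2))
```

Items with low item-level CVIs are candidates for revision or deletion before the instrument is finalized.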
• Criterion-related validity (which includes both predictive validity and concurrent validity) focuses on the correlation between the instrument and an outside criterion.
• Construct validity is an instrument's adequacy in measuring the focal construct. One construct validation method is the known-groups technique, which contrasts scores of groups presumed to differ on the attribute; another is factor analysis, a statistical procedure for identifying unitary clusters of items or measures.
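The known-groups technique mentioned above can be sketched with a hypothetical example: scores on a new dyspnea scale for two groups presumed to differ (say, patients with and without chronic lung disease), summarized with a pooled-variance independent-samples t statistic. All scores below are invented.

```python
# Hypothetical sketch of the known-groups technique for construct
# validity: a clear mean difference between groups presumed to differ
# on the attribute supports the instrument's construct validity.
from math import sqrt
from statistics import mean, variance

def independent_t(g1, g2):
    """Pooled-variance independent-samples t statistic."""
    n1, n2 = len(g1), len(g2)
    sp2 = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
    return (mean(g1) - mean(g2)) / sqrt(sp2 * (1 / n1 + 1 / n2))

copd_group    = [28, 31, 25, 30, 27, 29]  # expected to score high (invented)
healthy_group = [12, 15, 10, 14, 11, 13]  # expected to score low (invented)
t = independent_t(copd_group, healthy_group)
print(round(t, 1))
```

A large, statistically significant t in the predicted direction is consistent with the instrument measuring the intended construct; failure to find the expected difference would cast doubt on its construct validity.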