Health Care Research and Statistical Techniques

Afza.Malik GDA

Statistical Techniques, ANCOVA in Experimental and Non-experimental Studies, Characteristics of Variables, Extraneous Variables and ANOVA, Statistical Characteristics of ANOVA, Null Hypothesis in ANOVA, Acceptance and Rejection of the Null Hypothesis, MANOVA, Repeated Measures ANOVA, Logistic Regression, Chi-square.

Statistical Techniques

    Analysis of covariance (ANCOVA) is a statistical technique that combines analysis of variance (ANOVA) with regression to measure the differences among group means. The advantages of ANCOVA include the ability to reduce the error variance in the outcome measure and the ability to measure group differences after allowing for other differences between subjects. 

    The error variance is reduced by controlling for variation in the dependent measure that comes from variables measured at the interval or ratio level (called covariates) that influence all the groups being compared. Left uncontrolled, the covariate contributes to the within-group variation and can obscure the differences among groups. 

    In ANCOVA the variation from this variable is measured and extracted from the within (or error) variation. The effect is the reduction of error variance and therefore an increase in the power of the analysis.
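As a rough illustration (not part of the original text), an analysis of this kind can be sketched in Python with statsmodels; the data frame and the column names group, covariate, and outcome are hypothetical, and the values are made up.

```python
# Minimal ANCOVA sketch: the covariate is entered alongside the group factor,
# so its variation is removed from the error term before groups are compared.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "covariate": [10, 12, 14, 11, 13, 15, 9, 12, 16],
    "outcome":   [20, 23, 27, 25, 28, 33, 18, 22, 29],
})

model = smf.ols("outcome ~ C(group) + covariate", data=df).fit()
print(anova_lm(model, typ=2))  # F tests for the group effect and the covariate
```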

ANCOVA in Experimental and Non-experimental Studies

    ANCOVA has also been used in both experimental and nonexperimental studies to “equate” the groups statistically. When the groups differ on some variable, ANCOVA is used to reduce the impact of that difference. 

    Although ANCOVA has been widely used for such statistical “equalization” of groups, there is controversy about such efforts, and careful consideration should be given to the appropriateness of the manipulation.

Characteristics of Variables 

    As with ANOVA, there are one or more categorical variables as independent variables; the dependent variable is continuous and meets the requirements of normal distribution and equality of variance across groups. The covariate is an interval- or ratio-level measure.

    There are additional assumptions to be met in ANCOVA, and these are very important to the valid interpretation of results. There must be a linear relationship between the covariate and the dependent variable, and ANCOVA is most effective when the correlation is equal to or greater than .30. 

    The direction and strength of the relationship between the covariate and dependent variable must be similar in each group. This assumption is called homogeneity of regression across groups.
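One way to examine homogeneity of regression is to add a group-by-covariate interaction term and test it. The sketch below is illustrative only; the column names and values are hypothetical.

```python
# Homogeneity-of-regression check: fit a model that lets the covariate slope
# differ by group and test the interaction term.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "group":     ["A"] * 4 + ["B"] * 4,
    "covariate": [8, 10, 12, 14, 9, 11, 13, 15],
    "outcome":   [18, 21, 24, 27, 22, 25, 28, 31],
})

check = smf.ols("outcome ~ C(group) * covariate", data=df).fit()
print(anova_lm(check, typ=2))
# A non-significant C(group):covariate row is consistent with similar
# covariate-outcome slopes in every group.
```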

Extraneous Variables and ANOVA

    ANCOVA is an extension of the ANOVA model that reduces the error term by removing additional sources of variation. It is a means of controlling extraneous variation. As with other types of analysis of variance, post hoc tests are used for pairwise comparison of group means.

Statistical Characteristics of ANOVA

    Analysis of variance (ANOVA) is a parametric statistical test that measures differences between two or more mutually exclusive groups by calculating the ratio of between-to-within-group variance, called the F ratio. It is an extension of the t test, which compares two groups. The independent variable(s) are categorical (measured at the nominal level). 

    The dependent variable must meet the assumptions of normal distribution and equal variance across the groups. A one-way ANOVA means that there is only one independent variable (often called a factor), a two-way ANOVA indicates two independent variables, and an n-way ANOVA indicates that the number of independent variables is defined by n.
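As an illustration, a one-way ANOVA of this kind can be run with SciPy; the three groups of scores below are made up for the sketch.

```python
# Minimal one-way ANOVA sketch: the F ratio compares between-group to
# within-group variance for three mutually exclusive groups.
from scipy import stats

group_a = [4, 5, 6, 5, 7]
group_b = [6, 7, 8, 7, 9]
group_c = [9, 10, 11, 10, 12]

f_ratio, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
```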

Null Hypothesis in ANOVA

    The null hypothesis in ANOVA is that all groups are equal and drawn from the same population. To test this assumption, three measures of variation are calculated. The total variation is a measure of the variability of all subjects around the grand mean and is composed of within group variation and between group variation. 

    Within group variation is a measure of how much the scores of subjects within a group vary around the group mean. Between group variation is a measure of how much each group's mean varies from the grand mean or of how much difference exists between the groups. 

    Quantifying the total, between-group, and within-group variation is accomplished by calculating a sum of squares (the sum of the squared deviations of each of the scores around the respective mean) for each component of the variation.

    When the null hypothesis is true, the groups' scores overlap to a large extent, and the within group variation is greater than the between group variation. When the null hypothesis is false, the groups' scores show little overlapping, and the between groups variation is greater.
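The variance partition behind the F ratio can also be sketched by hand. The scores below reuse the made-up groups from the SciPy example above, so the hand-computed F matches the one reported by f_oneway.

```python
# Worked sketch of the ANOVA partition: total SS = between-group SS + within-group SS.
import numpy as np

groups = [np.array([4, 5, 6, 5, 7]),
          np.array([6, 7, 8, 7, 9]),
          np.array([9, 10, 11, 10, 12])]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1                  # number of groups minus 1
df_within = len(all_scores) - len(groups)     # total N minus number of groups
f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(ss_between, ss_within, f_ratio)
```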

Acceptance and Rejection of Null Hypothesis

    When the ratio of between to within group variation (F ratio) is significant, the null hypothesis is rejected, indicating a difference between the groups. When more than two groups are being compared, however, it cannot be determined from the F test alone which groups differ from the others. 

    In other words, a significant F test does not mean that every group in the analysis is different from every other group. To determine where the significant differences lie, further analysis is required. 

    Two types of comparisons can be made among group means. They include post hoc (after the fact) comparisons and a priori (planned) comparisons based on hypotheses stated prior to the analysis.

    A variety of post hoc techniques exist. The purpose of all of them is to decrease the likelihood of making a Type I error when making multiple comparisons. The Scheffe test is frequently reported. 

    The formula is based on the usual formula for the calculation of a t-test or F ratio, but the critical value for determining statistical significance is changed according to the number of comparisons to be made. A Bonferroni correction involves dividing the desired alpha (say .05) by the number of comparisons.
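The Bonferroni idea can be shown with a few lines of code; the three pairwise p-values below are hypothetical, and the statsmodels helper simply applies the same division of alpha by the number of comparisons.

```python
# Bonferroni sketch: with three comparisons and alpha = .05, each test is
# judged against .05 / 3 (equivalently, each p-value is multiplied by 3).
from statsmodels.stats.multitest import multipletests

p_values = [0.030, 0.012, 0.200]   # hypothetical pairwise p-values
alpha = 0.05

reject, p_adjusted, _, _ = multipletests(p_values, alpha=alpha, method="bonferroni")
print(alpha / len(p_values))       # per-comparison critical level (about .0167)
print(reject, p_adjusted)
```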

    The least significant difference (LSD) test is equivalent to multiple t tests. The modification is that a pooled estimate of variance is used rather than the variance common to the groups being compared. Tukey's honestly significant difference (HSD) is the most conservative comparison test and as such is the least powerful. 

    The critical values for Tukey remain the same for each comparison, regardless of the total number of means to be compared. The Student-Newman-Keuls test is similar to Tukey's HSD, but the critical values do not stay the same; they reflect the number of means being compared.     

    Tukey's wholly significant difference (WSD) uses critical values that are the average of those used in Tukey's HSD and Newman-Keuls. It is therefore intermediate in conservatism between those two measures.

    Planned comparisons, or a priori contrasts, are based on hypotheses stated before data are collected. Prespecified contrasts that are orthogonal (statistically unrelated) to each other may be developed and tested. Such comparisons are more powerful than post hoc contrasts.

MANOVA

    With two or more independent variables in an analysis, interactions between the independent variables can be tested. Testing for an interaction addresses the question of whether or not the results of a given treatment vary depending on the groups or conditions in which it is applied. 

    An ANOVA may include more than one dependent variable. Such an analysis is usually referred to as multivariate analysis of variance (MANOVA) and allows the researcher to look for relationships among dependent as well as independent variables. 

    When conducting a MANOVA, the assumptions underlying the univariate model still apply, and in addition the dependent variables should have a “multivariate normal distribution with the same variance covariance matrix in each group” (Norusis, 1994, p. 58). 

    The requirement that each group has the same variance-covariance matrix means that the homogeneity of variance assumption is met for each dependent variable and that the correlation between any two dependent variables must be the same in all groups. Box's M is the multivariate test for homogeneity of variance.

    In the univariate model, the F value is tested for significance. In the multivariate model there are four outcome measures. 

    They include Wilks's lambda, which represents the error variance; the Pillai-Bartlett trace, which represents the sum of the explained variances; Roy's greatest characteristic root, which is based on the first discriminant variate; and the Hotelling-Lawley trace, which is the sum of the between and within sums of squares for each of the discriminant variates. 

    Wilks's lambda is the most widely used. Analysis of variance is commonly used to test for group differences. Multivariate analysis of variance includes more than one dependent variable.
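A MANOVA of this kind can be sketched with statsmodels; the grouping factor and the two dependent variables (dv1, dv2) are hypothetical. The mv_test() output reports Wilks' lambda, Pillai's trace, the Hotelling-Lawley trace, and Roy's greatest root.

```python
# Minimal MANOVA sketch: one grouping factor, two made-up dependent variables.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "dv1":   [4, 5, 6, 7, 8, 9, 10, 11, 12],
    "dv2":   [2, 3, 2, 5, 6, 5, 8, 9, 8],
})

manova = MANOVA.from_formula("dv1 + dv2 ~ group", data=df)
print(manova.mv_test())   # the four multivariate test statistics
```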

Repeated Measures ANOVA

    Repeated measures analysis of variance is an extension of analysis of variance (ANOVA) that reduces the error term by partitioning out individual differences that can be estimated from the repeated measurement of the same subjects. There are two main types of repeated measures designs (also called within-subjects designs). 

    One involves taking repeated measures of the same variable(s) over time on a group or groups of subjects. The other involves exposing the same subjects to all levels of the treatment. This is often referred to as using subjects as their own controls. Because the observations are not independent of each other, there is correlation among the outcome measures. 

    This necessitates an assumption called compound symmetry. To meet this assumption, the correlations across the measurements (time points) must be the same, and the variances should be equal across measurements. This is important because the general robustness of the ANOVA model does not withstand much violation of this assumption.

    Repeated measures ANOVA is a particularly interesting technique because health care providers tend to take repeated measures on clients, and it often makes sense to do so with research subjects as well. There are stringent requirements for this analysis, however. 

    The most important is meeting the criteria for compound symmetry. This assumption is often violated, leading to improper interpretation of results. Most computer programs provide a test of this assumption. If the assumption is not met, several alternatives are available.

    First, rather than the univariate approach, in which the repeated measures are treated as within-subjects factors, one might use a multivariate approach (MANOVA). In MANOVA, the repeated measures would be treated as multiple dependent variables. 

    Another approach is to use an epsilon correction. The degrees of freedom are multiplied by the value of epsilon, and the new degrees of freedom, which are more conservative, are used to test the F value for significance.
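A minimal repeated measures sketch is shown below with statsmodels' AnovaRM; long-format data with one score per subject per time point are assumed, and the subject, time, and score columns are hypothetical.

```python
# Minimal repeated-measures ANOVA sketch (balanced design required by AnovaRM).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["t1", "t2", "t3"] * 4,
    "score":   [5, 6, 8, 4, 6, 7, 6, 7, 9, 5, 5, 8],
})

result = AnovaRM(df, depvar="score", subject="subject", within=["time"]).fit()
print(result)
# Sphericity / compound symmetry should still be checked before the F test
# is interpreted; AnovaRM itself does not test that assumption.
```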

    The problems with repeated measures analyses may be seen as “the carry-over effect, the latent effect, and the order or learning effect” (pp. 107-108). When subjects are exposed to more than one treatment, previous treatments may still be having an effect, that is, may be carried over. 

    An interaction with a previous treatment is referred to as a latent effect. This would occur if exposure to one treatment had an enhancing or depressing effect on a subsequent treatment. Randomization of the order of treatment is used to control for the order or learning effect. 

    Repeated measures ANOVA is a very useful technique for research by health professionals. There are fairly stringent requirements for the analysis, however.

    Correlation is a procedure for quantifying the linear relationship between two or more variables. It measures the strength and indicates the direction of the relationship. The Pearson product moment correlation coefficient (r) is the usual method by which the relation between two variables is quantified. 

    There must be at least two variables measured on each subject, and although interval or ratio-level data are most commonly used, it is also possible in many cases to obtain valid results with ordinal data. 

    Categorical variables may be coded for use in calculating correlations and regression equations. Although correlations can be calculated with data at all levels of measurement, certain assumptions must be made to generalize beyond the sample statistic. 

    The sample must be representative of the population to which the inference will be made. The variables that are being correlated must each have a normal distribution. The relationship between the two variables must be linear. For every value of one variable, the distribution of the other variable must have approximately equal variability. This is called the assumption of homoscedasticity.

    The correlation coefficient is a mathematical representation of the relationship that exists between two variables. The correlation coefficient may range from +1.00 through 0.00 to -1.00. A +1.00 indicates a perfect positive relationship, 0.00 indicates no relationship, and -1.00 indicates a perfect negative relationship. 

    In a positive relationship, as one variable increases, the other increases. In a negative relationship, as one variable increases, the other decreases.

The strength of correlation coefficients has been described as follows:

    .00-.25, little if any; .26-.49, low; .50-.69, moderate; .70-.89, high; .90-1.00, very high (Munro, 1997, p. 235).

    The coefficient of determination, r², is often used as a measure of the “meaningfulness” of r. This is a measure of the amount of variance the two variables share. It is obtained by squaring the correlation coefficient.
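As a small illustration, the Pearson r and the coefficient of determination can be computed with SciPy; the paired observations below are made up.

```python
# Minimal correlation sketch: r quantifies strength and direction, and
# squaring it gives the shared variance (coefficient of determination).
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 7, 8]

r, p_value = stats.pearsonr(x, y)
print(f"r = {r:.2f}, r^2 = {r**2:.2f}, p = {p_value:.4f}")
```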

    Correlational techniques may be used for control of extraneous variation. Partial correlation measures the relationship between two variables after statistically controlling for the influence of a confounding variable on both of the variables being correlated. It is usually expressed as r12.3, which indicates the correlation between variables 1 and 2, with the effect of variable 3 removed from both 1 and 2. 

    Semipartial correlation is the correlation of two variables with the effect of a third variable removed from only one of the variables being correlated. It is usually expressed as r1(2.3), which indicates the correlation between variables 1 and 2, with the effect of 3 removed only from variable 2.
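Both coefficients can be computed directly from the three pairwise Pearson correlations; the values r12, r13, and r23 below are hypothetical.

```python
# First-order partial and semipartial correlation from pairwise correlations.
import math

r12, r13, r23 = 0.50, 0.40, 0.30   # hypothetical pairwise Pearson correlations

# Partial correlation r12.3: variable 3 removed from both 1 and 2.
r12_3 = (r12 - r13 * r23) / math.sqrt((1 - r13**2) * (1 - r23**2))

# Semipartial correlation r1(2.3): variable 3 removed from variable 2 only.
r1_23 = (r12 - r13 * r23) / math.sqrt(1 - r23**2)

print(round(r12_3, 3), round(r1_23, 3))
```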

    Multiple correlation is a technique for measuring the relationship between a dependent variable and a weighted combination of independent variables. The multiple correlation is expressed as R, and R² indicates the amount of variance explained in the dependent variable by the independent variables. Canonical correlation measures the relationship between two sets of variables and is expressed as Rc.

    There are measures other than the Pearson r for measuring relationships. Before the advent of computers, “shortcut” methods of calculation were developed for certain circumstances. Three such measures are phi, point-biserial, and Spearman rho. These measures usually give the same result as r; their only advantage is for doing hand calculations. 

    Phi is used with two dichotomous variables and is often reported in conjunction with chi-square. Point-biserial can be used to calculate the relationship between one dichotomous and one continuous variable. Spearman rho can be used to measure the relationship between two rank-ordered variables.

    There are also nonparametric measures of relationship. These are considered “distribution-free”; that is, the assumption of normal distribution of the two variables does not have to be met. Kendall's tau is a nonparametric technique for measuring the relationship between two ranked (ordinal) variables. The contingency coefficient can be used to measure the relationship between two nominal variables. It is based on the chi-square statistic.

    There are also formulas that can be used to estimate the correlation coefficient, r. Biserial can be used when one variable is dichotomized and the other is continuous. Dichotomized means that the variable has been cut into two levels from a variable that would naturally have been continuous. 

    Biserial estimates what r would be if you changed the dichotomized variable into a continuous variable. The tetrachoric coefficient is an estimate of r based on the relationship between two dichotomized variables.

    Eta, sometimes called the correlation ratio, is referred to as the universal measure of the relationship between two variables. The values for eta range from 0 to 1. It can be used to measure nonlinear as well as linear relationships. When it is used with two continuous variables that have a linear relationship, it reduces to r.

    Correlational techniques are used to explore and test relationships among variables. They serve as the basis for developing prediction equations through regression techniques.

    Logistic regression is used to determine which variables affect the probability of the occurrence of an event. In logistic regression the independent variables may be at any level of measurement from nominal to ratio. The dependent variable is categorical, usually a dichotomous variable.

    Although it is possible to code the dichotomous variable as 1/0 and run a multiple regression or use discriminant function analysis for categorical outcome measures (two or more categories), this is generally not recommended. Multiple regression and discriminant function are based on the method of least squares, whereas the maximum likelihood method is used in logistic regression. 

    Because the logistic model is nonlinear, the iterative approach provided by the maximum likelihood method is more appropriate. In addition to providing a better fit with the data, logistic regression results include odds ratios that lend interpretability to the data. 

    The odds of an outcome being present as a measure of association have found wide use, especially in epidemiology, because the odds ratio approximates how much more likely (or unlikely) it is for the outcome to be present given certain conditions. The odds are defined as the probability of occurrence over the probability of nonoccurrence.

    The probability of the observed results, given the parameter estimates, is known as the likelihood. “Since the likelihood measure is a small number, less than 1, it is customary to use minus 2 times the log of the likelihood as a measure of how well the estimated model fits the data” (Norusis, 1994, p. 10). 

    In logistic regression, comparison of observed to predicted values is based on the log likelihood (LL) function. A good model is one that results in a high likelihood of the observed results. A nonsignificant -2 LL indicates that the data fit the model.

    The goodness of fit statistic compares the observed probabilities to those predicted by the model. Assessment of this is also provided in a classification table where percentages of correct predictions are provided. This statistic has a chi-square distribution. A nonsignificant statistic indicates that the data fit the model.

    The model chi-square tests the null hypothesis that the coefficients for all the independent variables equal 0. It is equivalent to the F test in regression. A significant result indicates that the independent variables are contributing significantly. 

    As in regression, one must assess the significance of each predictor. In multiple regression the b-weights are used in the calculation of the prediction equation. In logistic regression the b weights are used to determine the probability of the occurrence of an event.
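A minimal logistic regression sketch is shown below with statsmodels; the outcome event and the predictors age and treated are hypothetical names with made-up values. The summary reports the b weights and their significance tests, exponentiating the b weights gives odds ratios, and predict() returns the estimated probability of the event.

```python
# Minimal logistic regression sketch: dichotomous outcome, two predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "event":   [0, 0, 1, 0, 0, 1, 1, 1, 0, 1],
    "age":     [34, 40, 45, 50, 38, 60, 55, 65, 42, 58],
    "treated": [0, 1, 0, 0, 1, 1, 0, 1, 1, 1],
})

model = smf.logit("event ~ age + treated", data=df).fit(disp=False)
print(model.summary())           # coefficients and their significance tests
print(np.exp(model.params))      # odds ratios for each predictor
print(model.predict(df).head())  # predicted probabilities of the event
```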

    As with all methods of regression, it is of utmost importance to select variables for inclusion in the model on the basis of clear scientific rationale. Following the fit of the model, the importance of each variable included in the model should be verified (Norusis, 1994). 

    This includes examination of the Wald statistic, which provides a measure of the significance (p) value for each variable. Additionally, one can test the model by systematically including and excluding the predictors. Variables that do not contribute to the model on the basis of these criteria should be eliminated and a new model fit. Once a model has been developed that contains the essential variables, the addition of interaction terms should be considered.

Logistic Regression

    Logistic regression has been reported in the medical literature for some time, particularly in epidemiological studies. Recently, it has become more common in nursing research. This is the result of a new appreciation of the technique and the availability of software to manage the complex analysis. 

    This multivariate technique for assessing the probability of the occurrence of an event requires fewer assumptions than does regression or discriminant function analysis and provides estimates in terms of odds ratios that add to the understanding of the results.

    Nonparametric statistics are techniques that are not based on assumptions about normality of data. When parametric tests of significance are used, at least one population parameter is being estimated from sample statistics. 

    To arrive at such an estimate, certain assumptions must be made; the most important one is that the variable measured in the sample is normally distributed in the population to which a generalization will be made. 

    With nonparametric tests there is no assumption about the distribution of the variable in the population. For that reason, non-parametric tests are often called distribution free.

    At one time, level of measurement was considered a very important determinant in the decision to use parametric or nonparametric tests. Some authors said that parametric tests should be reserved for use with interval and ratio level data. More recent studies, however, have shown that the use of parametric techniques with ordinal data rarely distorts the results.

    The calculations involved in nonparametric techniques are much easier than those associated with parametric techniques, but the use of computers makes that of little concern. Nonparametric techniques are valuable when using small samples and when there are distortions of the data that seriously violate the assumptions underlying the parametric technique.

Chi-square

    Chi-square is the most frequently reported nonparametric technique. It is used to compare the actual number (or frequency) in each group with the “expected” number. The expected number can be based on theory, previous experience, or comparison groups. 

    Chi-square tests whether or not the expected number differs significantly from the actual number. Chi-square is the appropriate technique when variables are measured at the nominal level. It may be used with two or more mutually exclusive groups.
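A chi-square comparison of observed and expected frequencies can be sketched with SciPy; the 2x2 table of counts below is hypothetical.

```python
# Minimal chi-square sketch: observed counts in a contingency table are
# compared with the expected counts under independence.
from scipy.stats import chi2_contingency

observed = [[30, 20],   # e.g., group 1: outcome present / absent (made up)
            [18, 32]]   # e.g., group 2: outcome present / absent (made up)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(chi2, p_value, dof)
print(expected)   # the expected frequencies the observed counts are tested against
```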

     When the groups are not mutually exclusive, as when the same subjects are measured twice, an adaptation of chi-square, the McNemar test, may be appropriate. The McNemar test can be used to measure change when there are two dichotomous measures on the subjects.

    When comparing groups of subjects on ordinal data, two commonly used techniques are the Mann-Whitney U, which is used to compare two groups and is thus analogous to the t test, and Kruskal-Wallis H, which is used to compare two or more groups and is thus analogous to the parametric technique analysis of variance.
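Both of these rank-based tests are available in SciPy; the scores below are made up for the sketch.

```python
# Minimal nonparametric sketch: Mann-Whitney U for two groups,
# Kruskal-Wallis H for three or more groups.
from scipy.stats import mannwhitneyu, kruskal

group_1 = [3, 5, 6, 8, 9]
group_2 = [7, 9, 10, 12, 14]
group_3 = [11, 13, 15, 16, 18]

print(mannwhitneyu(group_1, group_2))          # two-group comparison
print(kruskal(group_1, group_2, group_3))      # three-group comparison
```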

    When one has repeated measures on two or more groups and the outcome measure is not appropriate for parametric techniques, two nonparametric techniques that may be appropriate are the Wilcoxon matched pairs signed rank test and the Friedman matched samples. 

    The Wilcoxon matched-pairs test is analogous to the parametric paired t test, and the Friedman matched-samples test is analogous to a repeated measures analysis of variance.

    In addition to nonparametric techniques for making group comparisons, there are nonparametric techniques for measuring relationships. There is some confusion about these techniques. 

    For example, point-biserial and Spearman rho are often considered non-parametric techniques but are actually short-cut formulas for the Pearson product moment correlation (r). Biserial and tetrachoric coefficients are estimates of r, given certain conditions.

    True nonparametric measures of relationship include Kendall's tau and the contingency coefficient. Kendall's tau was developed as an alternative procedure for Spearman rho. It may be used when measuring the relation between two ranked (ordinal) variables. The contingency coefficient can be used to measure the relationship between two nominal level variables. The calculation of this coefficient is based on the chi-square statistic.

    Nonparametric techniques should be considered if assumptions about the normal distribution of variables cannot be met. These techniques, although less powerful, provide a more accurate appraisal of group differences and relationships among variables when the assumptions underlying the parametric techniques have been violated.

Regression

    Regression is a statistical method that makes use of the correlation between two variables and the notion of a straight line to develop an equation that can be used to predict the score of one of the variables, given the score of the other. 

    In the case of a multiple correlation, regression is used to establish a prediction equation in which the independent variables are each assigned a weight based on their relationship to the dependent variable, while controlling for the other independent variables.

    Regression is useful as a flexible technique that allows prediction and explanation of the interrelationships among variables and the use of categorical as well as continuous variables. Regression literally means a falling back toward the mean. With perfect correlations there is no falling back; using standardized scores, the predicted score is the same as the predictor. 

    With less than perfect correlations there is some error in the measurement; the more error, the more regression towards the mean. The regression equation consists of an intercept constant (a) and the b's associated with each independent variable. Given those elements and an individual's score on the independent variables, one can predict the individual's score on the dependent variable. The intercept constant (a) is the value of the dependent variable when the independent variable equals zero. It is the point at which the regression line intercepts the Y axis.

    The letter b is called the regression coefficient or regression weight; it is the rate of change in the dependent variable with a unit change in the independent variable. It is a measure of the slope of the regression line, which is the “line of best fit” and passes through the exact center of the data in a scatter diagram. Beta is the standardized regression coefficient.

    In multiple regression the multiple correlation (R) and each of the b-weights are tested for significance. In most reports the squared multiple correlation, R², is reported, as that is a measure of the amount of variance accounted for in the dependent variable. 

    A significant R² indicates that a significant amount of the variance in the dependent variable has been accounted for. Testing the b-weight tells us whether the independent variable associated with it is contributing significantly to the variance accounted for in the dependent variable.
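A multiple regression of this kind can be sketched with statsmodels; the outcome y and predictors x1 and x2 are hypothetical names with made-up values.

```python
# Minimal multiple regression sketch: R^2, the intercept constant (a),
# the b weights, and their significance tests.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "y":  [10, 12, 14, 15, 18, 20, 22, 25],
    "x1": [1, 2, 3, 4, 5, 6, 7, 8],
    "x2": [5, 3, 6, 4, 7, 6, 8, 9],
})

model = smf.ols("y ~ x1 + x2", data=df).fit()
print(model.rsquared)    # R^2: variance in y accounted for by x1 and x2
print(model.params)      # intercept constant (a) and the b weights
print(model.pvalues)     # significance test for the intercept and each b
```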

    Although variables at all levels of measurement may be entered into the regression equation, nominal-level variables must be specially coded prior to entry. Three main types of coding are used: dummy, effect, and orthogonal. Regardless of the method of coding used, the overall R is the same, as is its significance. The differences lie in the meaning attached to testing the b-weights for significance. 

    With dummy coding the b-weight represents the difference between the mean of the group represented by that b and the mean of the group assigned 0s throughout. In effect coding the b's represent the difference between the mean of the group associated with that b-weight and the grand mean. 

    With orthogonal coding the b-weight measures the difference between two means specified in a hypothesized contrast. Interactions among variables may also be coded and entered into the regression equation.

    When using regression, it is of utmost importance to select variables for inclusion in the model on the basis of clear scientific rationale. The method for entering variables into the equation is important, as it affects the interpretation of the results. Variables may be entered all at once, one at a time, or in subsets. 

    Decisions about method of entry may be statistical, as in stepwise entry (where the variable with the highest correlation with the dependent variable is entered first), or theoretical. Stepwise methods have been criticized for capitalizing on chance related to imperfect measurement of the variables being correlated. 

    It is generally recommended that decisions about the order of entry of variables into the regression equation should be made on the basis of the research questions being addressed.

    Problems with multiple regression include a high degree of interrelatedness among the independent variables, referred to as multicollinearity. Selection of variables based on theoretical considerations, followed by careful screening of variables and testing of assumptions prior to analysis, can reduce potential problems. 

    If multicollinearity is a problem, decisions must be made about which variables to eliminate. Residual analysis, conducted as part of the regression procedure, can contribute an additional check on whether or not the assumptions underlying the analysis have been met.

    Multiple regression is the most commonly reported statistical technique in health care research. It can be used for both explanation and prediction but is more commonly reported as a method for explaining the variability in an outcome measure.

    The t test involves an evaluation of means and distributions of two groups. The t test, or Student's t test, is named after its inventor, William Gosset, who published under the pseudonym Student. 

    Gosset invented the t test as a more precise method of comparing groups. The t distributions are a set of means of randomly drawn samples from a normally distributed population. They are based on the sample size and vary according to the degrees of freedom.

    The test reflects the probability of getting a difference of a given magnitude in groups of a particular size with a certain variability if random samples drawn from the same population were compared. Three factors are included in the analysis: difference between the group means, size of each group, and variability of scores within the groups.

    Given the same mean difference, an increase in group size increases the likelihood of a significant difference between two groups, and an increase in group variability decreases the likelihood of significant difference. Increased variability increases the error term and the likelihood of overlap between the scores of the two groups, thereby diminishing the difference between them.

    There are three t tests. The first is used to compare two mutually exclusive groups when the dependent variable is normally distributed and the variances of the two groups are equal. The equal variance assumption is called homogeneity of variance and indicates that the groups are drawn from the same population. This version of the t test is referred to as the pooled or equal-variance t test because the denominator contains the variance for all the subjects.

    If the assumption of homogeneity of variance is not met, a second formula, called the separate or unequal variance t test, can be used. In that case the variance is not pooled for all subjects; instead, the separate variances for each group are contained in the denominator.

    When the two sets of scores are not independent, as when two measures are taken on the same subjects or matched pairs are used, a paired or correlated t test formula can be used. The formula incorporates the correlation between the two sets of scores. The t tests are very useful when two groups or two correlated measures are being compared. 
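All three versions can be sketched with SciPy; the two sets of scores below are made up.

```python
# Minimal t-test sketch: pooled (equal-variance), separate (unequal-variance),
# and paired / correlated versions.
from scipy import stats

group_1 = [5, 6, 7, 8, 9]
group_2 = [7, 8, 9, 10, 12]

print(stats.ttest_ind(group_1, group_2))                   # pooled variance
print(stats.ttest_ind(group_1, group_2, equal_var=False))  # separate variance
print(stats.ttest_rel(group_1, group_2))                   # paired / correlated
```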

    Although analysis of variance can accomplish the same results, the t test continues to be widely used.

Stress

    The term “stress” first appeared in the Cumulative Index to Nursing and Allied Health Literature (CINAHL) in 1956. Nursing's interest in stress as a focus of research has mushroomed since 1970. 

    Although the word “stress” is familiar to many and has become part of our everyday vocabulary, the term conveys divergent meanings, and multiple theories have been proposed to explain it. 

    Most of the theories attempting to describe and explain stress as a human phenomenon can be categorized under one of three very different orientations to the concept: response-based, stimulus-based, and transaction-based. 

    The response-based orientation was developed by Selye (1976), who defined stress as a nonspecific response of the body to any demand. That is, regardless of the cause, situational context, or psychological interpretation of the demand, the stress response is characterized by the same chain of events or same pattern of physiological correlates.

    Defined as a response, stress indicators become the dependent variables in research studies. Nurse researchers who have used the response-based orientation measure catecholamines, cortisol, urinary Na/K ratio, vital signs, brain waves, electrodermal skin responses, and cardiovascular complaints as indicators of stress. 

    The demand component of Selye's definition is treated as an independent variable; hospitalization, surgery, or critical care unit transfer was commonly the assumed stressor in much of the nursing research using this orientation. 

    The response-based model of stress is not consistent with nursing's philosophical presuppositions that each individual is unique and that individuals respond holistically and often differently to similar situations (Lyon & Werner, 1987). The stimulus-based theoretical explanation treats stress as a stimulus that causes disrupted responses. 

    As a stimulus, stress is viewed as an external force similar to the engineering use of the term to represent dynamics of strain in metals or an external force directed at a physical object. Defined in this way stress becomes the independent variable in research studies. The most frequently cited example of a stimulus-based theory is the life event theory proposed by TH Holmes and Rahe (1967). 

    Stress is operationalized as a stable, additive phenomenon that is measurable by researcher-selected life events or life changes that typically have preassigned normative weights. The primary theoretical proposition of the stimulus-based orientation is that too many life events or changes increase vulnerability to illness. Results of studies (Lyon & Werner, 1983) using the life event perspective have failed to explain illness, accounting for only 2% to 4% of the incidence of illness. 

    Noting the limitations of the stimulus-based orientation yet recognizing the need to attend to the “initiator” of a stress experience, Werner (1993) proposed a useful classification of stressors that includes dimensions of locus, duration, temporality, forecasting, tone, and impact.

    The third way to conceptualize stress is as a transaction between person and environment. In this context stress refers to uncomfortable, tension-related emotions that arise when demanding situations tax available resources and some kind of harm, loss, or negative consequence is anticipated (Lazarus, 1966; Lazarus & Folkman, 1984). 

    As a special note, the Lazarus (1966) reference represents a classic work in demonstrating how theory informs research and how research in turn shapes and reshapes theory. In the transactional orientation, stress represents a composite of experiences, including threatening appraisals, stress emotions (anxiety, fear, anger, guilt, depression), and coping responses. 

    As such, the term “stress” has heuristic value but is a difficult construct to study. Use of a transactional theoretical orientation requires that the researcher clearly delineate which aspects of the person environment transaction are to be studied (Lazarus; Lazarus & Folkman). 

    Commonly, the independent variables in experimental and quasi-experimental studies based on the transactional orientation are personal resources such as self-esteem, perceived control, uncertainty, social support, and hardiness. 

    Appraisal of threat versus appraisal of challenge is commonly studied as a mediating factor between resource strength and coping responses. Dependent variables often include somatic outcomes such as pain, emotional disturbances such as anxiety and depression, and well-being. The transactional model was deemed by Lyon and Werner (1987) to be compatible with nursing's philosophical suppositions.

    Lyon and Werner (1987) published a critical review of 82 studies conducted by nurses from 1974 to 1984. The studies reviewed fell evenly across the three different theoretical orientations, and approximately 25% of the studies were atheoretical in nature. 

    In 1993, Barnfather and Lyon edited a monograph of the proceedings of a synthesis conference on stress and coping held in conjunction with the Midwest Nursing Research Society. 

    This critical review of the research covered 296 studies published from 1980 to 1990. Both the 1987 and 1993 critical reviews noted a disturbing absence of research programs, making it difficult to identify what we have learned from the discipline's research efforts. 

    A compilation of critical reviews of the nursing research literature from 1991 to 1995, focused on stressors and health outcomes, stressors and chronic conditions, coping, resources, appraisal and perception, and the influence of nursing interventions on the stress-health outcome linkage, consistently noted the increase in well-designed studies (Werner & Frost, 2000). 

    Each of these critical reviews noted knowledge gained and gaps in knowledge to guide future research.

     In the landmark Handbook of Stress, Coping, and Health: Implications for Nursing Research, Theory and Practice (Rice, 2000), the evolution of the efforts of nurse researchers to test various theoretical models of stress, coping, and health is critically reviewed. Importantly the handbook includes critical reviews of developing programs of nursing research.

    It is clear from all of the aforementioned critical reviews that our knowledge of how stress affects health is evolving. The significance of nursing research in the area of stress grows even more important in the era of escalating costs for health care services. 

    It is widely recognized that as many as 65% of visits to physician offices are for illnesses that have no discernible medical cause, and many of those illnesses are thought to be stress related. Furthermore, productivity in the workplace is thought to be greatly affected by the deleterious effects of stress. 

    Future directions for nursing research in the area of stress will focus on:

(a) effects of psychological stress on the somatic sense of self, functional ability, the experience of illness, and aberrant behaviors such as abuse and use of alcohol and drugs.

(b) the identification of patterns of variables that predict vulnerability or at-risk status for stress related illness experiences and aberrant behaviors.

(c) intervention studies to evaluate the effects of various stress prevention and stress management strategies, including cognitive restructuring, guided imagery, desensitization, and meditation, on stress-related illnesses and aberrant behaviors.
