Geriatric Nursing: Measuring Performance and Improving Quality

Afza.Malik GDA

Quality Improvement In Geriatric Nursing 


Key components of quality, the challenges of measuring it, strategies for improvement, and the characteristics of good performance measures.

Learning Objectives

1. Discuss key components of the definition of quality as outlined by the Institute of Medicine (IOM)

2. Describe three challenges of measuring quality of care

3. Delineate three strategies for addressing the challenges of measuring quality

4. List three characteristics of a good performance measure

Nursing Care and Effort for Quality Improvement 

    Nadzam and Abraham (2003) state that “the main objective of implementing best practice protocols for geriatric nursing [is] to stimulate nurses to practice with greater knowledge and skill, and thus improve the quality of care to older adults.”

    Although improved patient care and safety are certainly goals, providers also need to be focused on the implementation of evidence-based practice and on improving outcomes of care.

    The implementation of evidence-based nursing practice as a means of providing safe, quality patient care and achieving positive outcomes is well supported in the literature. However, in order to ensure that protocols are implemented correctly, as is true with the delivery of all nursing care, it is essential to evaluate the care provided.

    Outcomes of care are gaining increased attention and will be of particular interest to providers as the health care industry continues to move toward a pay-for-performance (P4P)/value-based purchasing (VBP) reimbursement model.

Background Knowledge and Statement of Performance Improvement

    The improvement of care and clinical outcomes, commonly known as Performance Improvement, requires a defined, organized approach. Improvement efforts are typically guided by the organization's Quality Assessment (measurement) and Performance Improvement (process improvement) model.

    Some well-known models or approaches for improving care and processes include Plan-Do-Study-Act (PDSA; Institute for Healthcare Improvement) and Six Sigma.

    These methodologies are simply an organized approach to defining improvement priorities, collecting data, analyzing the data, making sound recommendations for process improvement, implementing identified changes, and then reevaluating the measures. 
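
    To make these steps concrete, the following is a minimal Python sketch of one improvement cycle applied to a single measure. The measure name, rates, and decision thresholds are invented for illustration; they are not part of PDSA, NICHE, or any cited protocol.

```python
# Minimal sketch of one Plan-Do-Study-Act (PDSA) iteration for a single
# performance measure. All names and numbers are illustrative assumptions.

def pdsa_cycle(measure_name, baseline_rate, post_change_rate, goal_rate):
    """Compare a measure before and after a small test of change and
    recommend whether to adopt, adapt, or abandon the change."""
    # Plan: state the aim and the prediction.
    print(f"Plan: reduce '{measure_name}' from {baseline_rate:.1%} toward {goal_rate:.1%}")

    # Do: the change is implemented on a small scale (data collected elsewhere).

    # Study: compare observed results with the prediction.
    improvement = baseline_rate - post_change_rate
    print(f"Study: rate moved from {baseline_rate:.1%} to {post_change_rate:.1%}")

    # Act: decide the next step based on what was learned.
    if post_change_rate <= goal_rate:
        return "adopt: goal met; standardize the change and keep monitoring"
    elif improvement > 0:
        return "adapt: partial improvement; refine the change and run another cycle"
    else:
        return "abandon: no improvement; rethink the intervention"

# Example: falls with injury dropped from 4.2% to 3.1% against a 3.0% goal.
print(pdsa_cycle("falls with injury", 0.042, 0.031, 0.030))
```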

    Through Performance Improvement, standards of care (e.g., Nurses Improving Care for Healthsystem Elders [NICHE] protocols, in this case) are identified, evaluated, analyzed for variances, and improved. The goal is to standardize and improve patient care and outcomes.

    Restructuring, redesigning, and innovative processes aid in improving the quality of patient care. However, nursing professionals must be supported by a structure of continuous improvement that empowers nurses to make changes and delivers reliable outcomes (Johnson, Hallsey, Meredith, & Warden, 2006).

Does Organizational Change Bring Improvement?

    From the very beginning of the NICHE project in the early 1990s (Fulmer et al., 2002), the NICHE team struggled with the following questions: How can we measure whether the combination of models of care, staff education and development, and organizational change leads to improvements in patient care?

    How can we provide hospitals and health systems that are committed to improving their nursing care to older adults with guidance and frameworks, let alone tools for measuring the quality of geriatric care? 

    In turn, these questions generated many other questions: Is it possible to measure quality? Can we identify direct indicators of quality? Or do we have to rely on indirect indicators (e.g., if 30-day readmissions of patients older than age 65 drop, can we reasonably state that this reflects an improvement in the quality of care)?

    What factors may influence our desired quality outcomes, whether these are unrelated factors (e.g., the pressure to reduce length of stay) or related factors (e.g., the severity of illness)? How can we design evaluation programs that enable us to measure quality without adding more burden (of data collection, of taking time away from direct nursing care)?

    No doubt, the results from evaluation programs should be useful at the “local” level. Would it be helpful, though, to have results that are comparable across clinical settings (within the same hospital or health system) and across institutions (e.g., as quality benchmarking tools)?

    Many of these questions remain unanswered today, although the focus on defining practice through an evidence-based approach is becoming the standard, for it is against a standard of care that we monitor and evaluate expected care. 

    Defining outcomes for internal and external reporting is expected, as is the improvement of processes required to deliver safe, affordable, and quality patient care.

What Is Quality of Care in Nursing?

     The concept of performance measures as the evaluation link between care delivery and quality improvement is introduced. 

    It also describes external comparative databases sponsored by the Centers for Medicare & Medicaid Services (CMS) and other quality improvement organizations. It concludes with a description of the challenge of selecting performance measures.

Principles of Evaluation

    It is important to reaffirm two key principles for the purposes of evaluating nursing care in this context. 

    First, at the management level, it is indispensable to measure the quality of geriatric nursing care; however, doing so must help those who actually provide care (nurses) and must impact on those who receive care (older adult patients). 

    Second, measuring quality of care is not the end goal; rather, it is done to enable the continuous use of quality-of-care information to improve patient care.

Assessment of the Problem: Quality Health Care Defined

     It is not uncommon to begin a discussion of quality-related topics without reflecting on one's own values and beliefs surrounding quality health care. 

    Many have tried to define the concept, but like the old cliché “beauty is in the eye of the beholder,” so is our own perception of quality. Health care consumers and providers alike are often asked, “What does quality mean to you?”

    The response typically varies and includes statements such as “a safe health care experience,” “receiving correct medications,” “receiving medications in a timely manner,” “a pain-free procedure or postoperative experience,” “compliance with regulation,” “accessibility to services,” “effectiveness of treatments and medications,” “efficiency of services,” “good communication among providers,” “information sharing,” and “a caring environment.”

    These are important attributes to remember when discussing the provision of care with clients and patients.

Measure of Quality of Care

    The IOM defines quality of care as “the degree to which health services for individuals and populations increase[s] the likelihood of desired health outcomes and are consistent with current professional knowledge” (Kohn, Corrigan, & Donaldson, 2000, p. 211).

    Note that this definition does not tell us what quality is, but what quality should achieve. This definition also does not say that quality exists if certain conditions are met (e.g., a ratio of x falls per y older orthopedic surgery patients, a 30-day readmission rate of 2%).

    Instead, it emphasizes that the likelihood of achieving desired levels of care is what matters. In other words, quality is not a matter of reaching something but, rather, the challenge, over and over, of improving the odds of reaching the desired level of outcomes. 

    Thus, the definition implies the cyclical and longitudinal nature of quality: what we achieve today must guide what we do tomorrow, better and better, over and over. The focus is on improving processes while demonstrating sustained improvement.

    The IOM definition stresses the framework within which to conceptualize quality: knowledge. 

    The best knowledge is research evidence, preferably from randomized clinical trials (experimental studies), yet without ignoring the relevance of less rigorous studies (nonrandomized studies, epidemiological investigations, descriptive studies, even case studies).

    Realistically, in nursing, we have limited evidence to guide the care of older adults. Therefore, professional consensus among clinical and research experts is a critical factor in determining quality. 

    Furthermore, knowledge is needed at three levels: to achieve quality, we need to know what to do (knowledge about best practice), we need to know how to do it (knowledge about behavioral skills), and we need to know what outcomes to achieve (knowledge about best outcomes).

    The IOM definition of quality of care contains several other important elements. “Health services” focuses the definition on the care itself. Granted, the quality of care provided is determined by such factors as knowledgeable professionals, good technology, and efficient organizations; however, these are not the focus of quality measurement. 

    Rather, the definition implies a challenge to health care organizations: The system should be organized in such a way that knowledge-based care is provided and that its effects can be measured. This brings us to the "desired health outcomes" element of the definition. 

    Quality is not an attribute (as in “My hospital is in the top 100 hospitals in the United States as ranked by US News & World Report”), but an ability (as in “Only x% of our older adult surgical patients go into acute confusion; of those who do, y% return to normal cognitive function within z hours after onset”).

    In the IOM definition, degree implies that quality occurs on a continuum from unacceptable to excellent. The clinical consequences are on a continuum as well. If the care is of unacceptable quality, the likelihood that we will achieve the desired outcome is nil. 

    In fact, we probably will achieve outcomes that are the opposite of what is desired. As the care moves up the scale toward excellent, the desired outcomes become more likely to be achieved. Degree also implies quantification.

    Although it helps to be able to talk to colleagues about, say, unacceptable, poor, average, good, or excellent care, these terms should be anchored by a measurement system. 

    Such systems enable us to interpret what, for instance, poor care is by providing us with a range of numbers that correspond to poor. In turn, these numbers can provide us with a reference point for improving care to the level of average: We measure care again, looking at whether the numbers have improved, then checking whether these numbers fall in the range defined as average. 
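
    As a simple illustration of such anchoring, the sketch below maps a hypothetical 0-100 indicator score to the categories named above. The cut points are invented; each organization would define its own.

```python
# Hypothetical cut points mapping a 0-100 indicator score to quality labels.
# The ranges are invented for illustration; an organization would set its own.
BANDS = [
    (90, "excellent"),
    (80, "good"),
    (70, "average"),
    (60, "poor"),
    (0,  "unacceptable"),
]

def quality_label(score):
    """Return the quality category whose lower bound the score meets."""
    for lower_bound, label in BANDS:
        if score >= lower_bound:
            return label
    raise ValueError("score must be between 0 and 100")

# Comparing two quarters shows whether care moved between categories.
last_quarter, this_quarter = 68, 74
print(quality_label(last_quarter), "->", quality_label(this_quarter))  # poor -> average
```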

    Likewise, if we see a worsening of scores, we will be able to conclude whether we have gone from, say, good to average. The terms individuals and populations underscore that quality of care is reflected in the outcomes of one patient and in the outcomes of a set of patients. This focuses our attention on providing quality care to individuals while aiming to raise the level of care provided to populations of patients.

    In summary, the IOM definition of quality of care forces us to think about quality in relative and dynamic rather than in absolute and static terms. Quality of care is not a state of being but a process of becoming.

    Quality is and should be measurable, using performance measures: “a quantitative tool that provides an indication of an organization's performance in relation to a specified process or outcome” (Schyve & Nadzam, 1998, p. 222).

    Quality improvement is a process of attaining ever better levels of care in parallel with advances in knowledge and technology. It strives toward increasing the likelihood that certain outcomes will be achieved. 

    This is the professional responsibility of those who are charged with providing care (clinicians, managers, and their organizations). On the other hand, consumers of health care (not only patients but also purchasers, payers, regulators, and accreditors) are much less concerned with the processes in place than with the results of those processes.

Clinical Outcomes and Publicly Reported Quality Measures

    Although it is important to evaluate clinical practices and processes, it is equally important to evaluate and improve outcomes of care. Clinical outcome indicators are receiving unprecedented attention within the health care industry from providers, payors, and consumers alike. 

    Regulatory and accrediting bodies review outcome indicators to evaluate the care provided by the organization prior to and during regulatory and accrediting surveys, and to evaluate clinical and related processes. 

    Organizations are expected to use outcome data to identify and prioritize the processes that support clinical care and demonstrate an attempt to improve performance. Providers may use outcomes data to support best practices by benchmarking their results with similar organizations.

    The benchmarking process is supported through publicly reported outcomes data at the national and state levels. National reporting occurs on the CMS website, where consumers and providers alike may access information and compare hospitals, home-care agencies, nursing homes, and managed care plans.

    Sites such as Home Health Compare list outcome indicators relative to the specific service or delivery model. Consumers may use those websites to select organizations and compare outcomes, one against another, to aid in their selection of a facility or service.

    These websites also serve as a resource for providers to benchmark their outcomes against those of another organization. Outcomes data also become increasingly important to providers as the industry shifts toward a P4P/ VBP model.

    In a P4P/VBP model, practitioners are reimbursed for achieved quality-of-care outcomes. Currently, the CMS has several P4P initiatives and demonstration projects. For example, the Hospital Quality Initiative, part of the US Department of Health and Human Services' broader national quality initiative, focuses on an initial set of 10 quality measures by linking reporting of those measures to the payments the hospitals receive for each discharge.

    The purpose of the Premier Hospital Quality Incentive Demonstration was to improve the quality of inpatient care for Medicare beneficiaries by giving financial incentives to almost 300 hospitals for delivering high-quality care.

    The Physician Group Practice Demonstration, mandated by the Medicare, Medicaid, and State Children's Health Insurance Program (SCHIP) Benefits Improvement and Protection Act of 2000 (BIPA), is the first P4P initiative for physicians under the Medicare program. 

    The Medicare Care Management Performance Demonstration (Medicare Modernization Act [MMA] section 649), modeled on the “bridges to excellence” program, is a 3-year P4P demonstration with physicians to promote the adoption and use of health information technology to improve the quality of patient care for chronically ill Medicare patients.

    The Medicare Health Care Quality Demonstration, mandated by section 646 of the MMA, is a 5-year demonstration program under which projects enhance quality by improving patient safety, reducing variations in utilization through appropriate use of evidence-based care and best practice guidelines, encouraging shared decision making, and using culturally and ethnically appropriate care.

Interventions and Care Strategies
Measuring Quality of Care

    Schyve and Nadzam (1998) identified several challenges to measuring quality. First, the suggestion that quality of care is in the eye of the beholder points to the different interests of multiple users. This issue encompasses both measurement and communication challenges. 

    Measurement and analysis methods must generate information about the quality of care that meets the needs of different stakeholders. In addition, the results must be communicated in ways that meet these different needs. Second, we must have good and generally accepted tools for measuring quality. 

    Thus, user groups must come together in their conceptualization of quality care so that relevant health care measures can be identified and standardized. A common language of measurement must be developed, grounded in a shared perspective on quality that is cohesive across, yet meets the needs of, the various user groups.

    Third, once the measurement systems are in place, data must be collected. This translates into resource demands and logistic issues as to who is to report, record, collect, and manage data. Fourth, data must be analyzed in statistically appropriate ways. This is not just a matter of using the right statistical methods. 

    More important, user groups must agree on a framework for analyzing quality data to interpret the results. Fifth, health care environments are complex and dynamic in nature. There are differences across health care environments, between types of provider organizations, and within organizations. 

    Furthermore, changes in health care occur frequently such as the movement of care from one setting to another and the introduction of new technology. Finding common denominators is a major challenge.

Addressing the Challenges

    These challenges are not insurmountable. However, making a commitment to quality care entails a commitment to putting the processes and systems in place to measure quality through performance measures and to report quality-of-care results. 

    This commitment applies as much to a quality improvement initiative on a nursing unit as it does to a corporate commitment by a large health care system. In other words, once an organization decides to pursue excellence (i.e., quality), it must accept the need to overcome the various challenges to measurement and reporting. Let us examine how this could be done in a clinical setting.

    McGlynn and Asch (1998) offer several strategies for addressing the challenges to measuring quality. First, the various user groups must identify and balance competing perspectives. 

    This is a process of giving and taking: not only proposing highly clinical measures (e.g., prevalence of pressure ulcers) but also providing more general data (e.g., use of restraints).

    It is a process of asking and responding: not only asking management for monthly statistics on medication errors but also agreeing to provide management with the necessary documentation of the reasons stated for restraint use. Second, there must be an accountability framework. 

    Committing to quality care implies that nurses assume several responsibilities and are willing to be held accountable for each of them:

(a) providing the best possible care to older patients

(b) examining their own geriatric nursing knowledge and practice

(c) seeking ways to improve it

(d) agreeing to evaluation of their practice

(e) responding to needs for improvement. 

    Third, there must be objectivity in the evaluation of quality. This requires setting and adopting explicit criteria for judging performance, then building the evaluation process on these criteria. 

  Nurses, their colleagues, and their managers need to reach consensus on how performance will be measured and what will be considered excellent (and good, average, etc.) performance. 

    Fourth, once these indicators have been identified, nurses need to select a subset of indicators for routine reporting. Indicators should give a reliable snapshot of the team's care to older patients. 

    Fifth, it is critical to separate as much as possible the use of indicators for evaluating patient care and the use of these indicators for financial or non-financial incentives. 

    Should the team be cost conscious? Yes, but cost should not influence any clinical judgment as to what is best for patients. Finally, nurses in the clinical setting must plan how to collect the data. 

    At the institutional level, this may be facilitated by information systems that allow performance measurement and reporting. 

    Ideally, point-of-care documentation will also provide the data necessary for a systematic and goal-directed quality-improvement program, thus eliminating separate data abstraction and collection activities.

Achieving Improvement

    The success of a quality improvement program in geriatric nursing care (and the ability to overcome many of the challenges) hinges on the decision as to what to measure. 

    We know that good performance measures must be objective, that data collection must be easy and as burdenless as possible, that statistical analysis must be guided by principles and placed within a framework, and that communication of results must be targeted toward different user groups. 

    Conceivably, we could try to measure every possible aspect of care; realistically, however, the planning for this will never reach the implementation stage.

    Instead, nurses need to establish priorities by asking these questions: Based on our clinical expertise, what is critical for us to know? What aspects of our care to older patients are high risk or high volume? 

    What parts of our elder care are problem-prone, either because we have experienced difficulties in the past or because we can anticipate problems caused by the lack of knowledge or resources? 

    What clinical indicators would be of interest to other user groups: patients, the general public, management, payors, accreditors, and practitioners? Throughout this prioritization process, nurses should keep asking themselves: What questions are we trying to answer, and for whom?

Selecting Quality Indicators

     The correct selection of performance measures or quality indicators is a crucial step in evaluating nursing care and is based on two important factors: frequency and volume. 

    Clearly, high-volume practices or frequent processes require focused attention to ensure that the care is being delivered according to protocol or that processes are functioning as designed.

    Problem-prone or high risk processes would also warrant a review because these are processes with inherent risk to patients or variances in implementing the process. The selection of indicators must also be consistent with organizational goals for improvement. 

    This provides buy-in from practitioners as well as administration when reporting and identifying opportunities for improvement. Performance measures (indicators) must be based on a standard of care, policy, procedure, or protocol.

    These documents, or standards of care, define practice and expectations in the clinical setting and, therefore, determine the criteria for the monitoring tool. The measurement of these standards simply reflects adherence to or implementation of these standards. 

    Once it is decided what to measure, nurses in the clinical geriatric practice setting face the task of deciding how to measure performance. 

    There are two possibilities: either the appropriate measure (indicator) already exists or a new performance measure must be developed. Either way, there are a number of requirements of a good performance measure that will need to be applied.

    Although indicators used to monitor patient care and performance do not need to be subject to the rigors of research, it is imperative that they reflect some of the attributes necessary to make relevant statements about the care. 

    The measure and its output need to focus on improvement, not merely the description of something. It is not helpful to have a very accurate measure that just tells the status of a given dimension of practice. 

    Instead, the measure needs to inform us about current quality levels and relate them to previous and future quality levels. It needs to be able to compute improvements or declines in quality over time so that we can plan for the future. For example, to have a measure that only tells the number of medication errors in the past month would not be helpful. 

    More helpful is a measure that tells what types of medication errors were made (perhaps with a severity rating), compares them with the errors made during previous months, and shows the changes over time in numbers and graphs, enabling the root-cause analysis needed to prevent future medication errors.
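
    A hedged sketch of what such a measure's output could look like: it tallies hypothetical medication errors by type, flags high-severity events, and reports the month-over-month change that feeds root-cause analysis. The categories and counts are invented.

```python
from collections import Counter

# Hypothetical medication-error logs: (error_type, severity 1-5), one tuple per event.
last_month = [("wrong dose", 2), ("omitted dose", 1), ("wrong time", 1),
              ("wrong dose", 3), ("wrong patient", 4)]
this_month = [("wrong dose", 2), ("wrong time", 1), ("omitted dose", 1)]

def summarize(events):
    """Count errors by type and flag high-severity (>= 4) events."""
    by_type = Counter(error_type for error_type, _ in events)
    high_severity = sum(1 for _, severity in events if severity >= 4)
    return by_type, high_severity

prev_types, prev_high = summarize(last_month)
curr_types, curr_high = summarize(this_month)

# Month-over-month change per error type (negative = improvement).
for error_type in sorted(set(prev_types) | set(curr_types)):
    change = curr_types[error_type] - prev_types[error_type]
    print(f"{error_type}: {prev_types[error_type]} -> {curr_types[error_type]} ({change:+d})")
print(f"high-severity events: {prev_high} -> {curr_high}")
```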

    Performance measures need to be clearly defined, including the terms used, the data elements collected, and the calculation steps employed. Establishing the definition prior to implementing the monitoring activity allows for precise data collection. 

    It also facilitates benchmarking with other organizations when the data elements are similarly defined and the data collection methodologies are consistent. Imagine that we want to monitor falls on the unit. 

    The initial questions would be as follows: What is considered a fall? Does the patient have to be on the floor? Does a patient slumping against the wall or onto a table while trying to prevent himself or herself from falling to the floor constitute a fall?

    Is a fall due to physical weakness or orthostatic hypotension treated the same as a fall caused by tripping over an obstacle? 

    The next question would be the following: Over what time are falls measured: a week, a fortnight, a month, a quarter, a year? The time frame is not a matter of convenience but of accuracy. 

    To be able to monitor falls accurately, we need to identify a time frame that will capture enough events to be meaningful and interpretable from a quality improvement point of view. External indicator definitions, such as those defined for use in the National Database of Nursing Quality Indicators (NDNQI), provide guidance for both the indicator definition and the data collection methodology for nursing-sensitive indicators.
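
    Once “fall” and the time frame are pinned down, the calculation itself is straightforward. The sketch below computes falls per 1,000 patient-days, the formulation commonly used for this nursing-sensitive indicator; the counts are hypothetical.

```python
# Falls per 1,000 patient-days, the usual nursing-sensitive formulation.
# Counts below are hypothetical; real data would come from incident reports
# and the unit's daily census.

def fall_rate_per_1000(falls, patient_days):
    """Rate = (falls / patient-days) * 1000."""
    if patient_days <= 0:
        raise ValueError("patient_days must be positive")
    return falls / patient_days * 1000

# A month with 6 recorded falls over 1,850 patient-days:
print(f"{fall_rate_per_1000(6, 1850):.2f} falls per 1,000 patient-days")  # 3.24
```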

    The nursing-sensitive indicators reflect the structure, process, and outcomes of nursing care. The structure of nursing care is indicated by the supply of nursing staff, the skill level of the nursing staff, and the education/certification of nursing staff. 

    Process indicators measure aspects of nursing care such as assessment, intervention, and registered nurse (RN) job satisfaction. 

    Patient outcomes that are determined to be nursing sensitive are those that improve if there is a greater quantity or quality of nursing care (e.g., pressure ulcers, falls, intravenous [IV] infiltrations); outcomes driven mainly by medical decisions or institutional factors rather than by nursing care are not considered nursing sensitive.

    Several nursing organizations across the country participate in data collection and submission, which allows for a robust database and excellent benchmarking opportunities.

Additional Indicators or Attributes

    Additional indicator attributes include validity, sensitivity, and specificity. Validity refers to whether the measure “actually measures what it purports to measure” (Wilson, 1989). Sensitivity and specificity refer to the ability of the measure to capture all true cases of the event being measured, and only true cases. 

    We want to make sure that a performance measure identifies true cases as true, and false cases as false, and does not identify a true case as false or a false case as true. Sensitivity of a performance measure is the likelihood of a positive test when a condition is present. 

    Lack of sensitivity is expressed as false negatives: the indicator calculates a condition as absent when in fact it is present. Specificity refers to the likelihood of a negative test when a condition is not present. False positives reflect a lack of specificity: the indicator calculates that a condition is present when in fact it is not.

    Consider, for example, screening for depression in older adults with the Geriatric Depression Scale (GDS), in which a score of 11 or greater is indicative of depression. How robust is this cutoff score of 11?

    What is the likelihood that someone with a score of 9 or 10 (i.e., negative for depression) might actually be depressed (a false negative)? Similarly, what is the likelihood that a patient with a score of 13 would not be depressed (a false positive)?
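
    Both properties can be computed from a two-by-two table of screening results against a reference diagnosis. The sketch below uses invented counts for a hypothetical GDS validation sample; only the formulas are standard.

```python
# Sensitivity and specificity from a hypothetical 2x2 validation table
# (GDS score >= 11 vs. a reference diagnosis of depression). Counts invented.
true_pos, false_neg = 42, 8    # depressed patients: screened positive / missed
true_neg, false_pos = 130, 20  # non-depressed patients: screened negative / flagged

sensitivity = true_pos / (true_pos + false_neg)   # P(test+ | condition present)
specificity = true_neg / (true_neg + false_pos)   # P(test- | condition absent)

print(f"sensitivity: {sensitivity:.2f}")  # 0.84
print(f"specificity: {specificity:.2f}")  # 0.87
```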

Reliability of Measures

    Reliability means that results are reproducible: the indicator measures the same attribute consistently across the same patients and across time. Reliability begins with a precise definition and specification, as described earlier.

    A measure is reliable if different people calculate the same rate for the same patient sample. The core issue of reliability is measurement error, or the difference between the actual phenomenon and its measurement: The greater the difference, the less reliable the performance measure. 

    For example, suppose that we want to focus on pain management in older adults with end-stage cancer. One way of measuring pain would be to ask patients to rate their pain as none, a little, some, quite a bit, or a lot. 

    An alternative approach would be to administer a visual analogue scale, a 10-point line on which patients indicate their pain levels. Yet another approach would be to ask the pharmacy to produce monthly reports of analgesic use by type and dose. Generally speaking, the more subjective the scoring or measurement, the less reliable it will be. 

    If all these measures were of equal reliability, they would yield the same result. The concept of reliability, particularly inter-rater reliability, becomes increasingly important in situations where data collection is assigned to several staff members.

    It is important to review the data collection methodology and the instrument in detail to avoid different approaches by the various people collecting the data.
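
    Inter-rater reliability can itself be quantified before monitoring goes live. A minimal sketch, assuming two hypothetical abstractors classify the same ten events as “fall” or “not a fall”: it computes raw percent agreement and Cohen's kappa, which corrects that agreement for chance.

```python
# Percent agreement and Cohen's kappa for two hypothetical data collectors
# classifying the same 10 events as "fall" (1) or "not a fall" (0).
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement from each rater's marginal proportions.
p_a1, p_b1 = sum(rater_a) / n, sum(rater_b) / n
expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)

kappa = (observed - expected) / (1 - expected)
print(f"agreement: {observed:.2f}, kappa: {kappa:.2f}")  # 0.80, 0.62
```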

    Several of the examples given earlier imply the criterion of interpretability. A performance measure must be interpretable; that is, it must convey a result that can be linked to the quality of clinical care. First, the quantitative output of a performance measure must be scaled in such a way that users can interpret it.

    For example, a scale that starts with 0 as the lowest possible level and ends with 100 is a lot easier to interpret than a scale that starts with 13,325 and has no upper boundary except infinity. Second, we should be able to place the number within a context. 

    Suppose we are working in a hemodialysis center that serves quite a large proportion of patients who have end-stage renal disease (ESRD) and are older than age 60, the group least likely to be fit for a kidney transplant yet with several years of life expectancy remaining.

    We know that virtually all patients with ESRD develop anemia (hemoglobin [Hb] level less than 11 g/dL), which in turn affects their activities of daily living (ADL) and instrumental activities of daily living (IADL) performance.

    In collaboration with the nephrologists, we initiate a systematic program of anemia monitoring and management, relying in part on published best practice guidelines. We want to achieve the best practice guideline of 85% of all patients having Hb levels equal to or greater than 11 g/dL.

    We should be able to succeed because the central laboratory provides us with Hb levels, which allows us to calculate the percentage of patients at Hb of 11 g/dL or greater.
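
    The benchmark calculation is a simple proportion. A sketch with hypothetical laboratory values:

```python
# Percentage of dialysis patients meeting the Hb >= 11 g/dL guideline,
# compared against the 85% best-practice target. Values are hypothetical.
hb_levels = [11.8, 10.2, 12.1, 11.0, 9.8, 11.5, 12.3, 10.9, 11.2, 11.7]

TARGET = 0.85
at_goal = sum(1 for hb in hb_levels if hb >= 11.0) / len(hb_levels)

print(f"{at_goal:.0%} of patients at Hb >= 11 g/dL "
      f"({'meets' if at_goal >= TARGET else 'below'} the 85% benchmark)")
```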

    The concept of risk-adjusted performance measures or outcome indicators is an important one. Some patients are sicker than others, some have more comorbidities, and some are older and frailer. No doubt, we could come up with many more risk variables that influence how patients respond to nursing care. 

    Good performance measures take this differential risk into consideration. They create a “level playing field” by adjusting quality indicators on the basis of the (risk for) severity of illness of the patients. It would not be fair to the health care team if the patients on the unit are a lot sicker than those on the unit a floor above. 

    The team is at greater risk for having lower quality outcomes, not because they provide inferior care, but because the patients are a lot sicker and are at greater risk for a compromised response to the care provided. 

    The sicker patients are more demanding in terms of care and ultimately are less likely to achieve the same outcomes as less ill patients.
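
    One common way to create that level playing field is an observed-to-expected (O/E) ratio: each patient's expected probability of the outcome comes from a risk model, and the unit is judged on observed events relative to the sum of those probabilities. The sketch below uses invented risk estimates.

```python
# Observed-to-expected (O/E) ratio as a simple risk-adjusted indicator.
# Each patient's expected probability of the outcome (e.g., pressure ulcer)
# would come from a risk model; the numbers here are invented.

def oe_ratio(observed_events, expected_probs):
    """O/E < 1 means fewer events than the case mix predicts."""
    expected = sum(expected_probs)
    return observed_events / expected

# A sicker unit: high expected risks, 3 observed events among 6 patients.
sicker_unit = oe_ratio(3, [0.6, 0.5, 0.7, 0.4, 0.6, 0.5])       # expected 3.3
# A healthier unit: low expected risks, 2 observed events among 6 patients.
healthier_unit = oe_ratio(2, [0.1, 0.2, 0.1, 0.15, 0.1, 0.15])  # expected 0.8

print(f"sicker unit O/E: {sicker_unit:.2f}")        # 0.91 -> better than expected
print(f"healthier unit O/E: {healthier_unit:.2f}")  # 2.50 -> worse than expected
```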

    Performance measures must also be easy to collect. The many examples cited earlier refer to the importance of using performance measures for which data are readily available, can be retrieved from existing sources, or can be collected with little burden.

    The goal is to gather good data quickly without running the risk of having “quick and dirty” data. We begin the process of deciding how to measure by reviewing existing measures. There is no need to reinvent the wheel, especially if good measures are out there. 

    Nurses should review the literature, check with national organizations, and consult with colleagues. Yet, we should not adopt existing measures blindly. Instead, we need to subject them to a thorough review using the characteristics identified previously. Also, health care organizations that have adopted these measures can offer their experience. 

    It may be that after an exhaustive search, we cannot find measures that meet the various requirements outlined previously. We decide instead to develop our own in-house measure. The following are some important guidelines:

    1. Zero in on the population to be measured. If we are measuring an undesirable event, we must determine the group at risk for experiencing that event, then limit the denominator population to that group. 

    If we are measuring a desirable event or process, we must identify the group that should experience the event or receive the process. Where do problems tend to occur? What variables of this problem are within our control? 

    If some are not within our control, how can we zero in even more on the target population? In other words, we exclude patients from the population when good reason exists to do so (e.g., those allergic to the medication being measured).

    2. Define terms. This is a painstaking but essential effort. It is better to measure 80% of an issue with 100% accuracy than 100% of an issue with 80% accuracy.

    3. Identify and define the data elements and allowable values required to calculate the measure. This is another painstaking but essential effort. The 80/100 rule applies here as well.

    4. Test the data collection process. Once we have a prototype of a measure ready, we must examine how easy or difficult it is to get all the required data.
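
    These guidelines can be captured in an explicit, written measure specification. A minimal sketch follows, using a hypothetical fall-risk assessment measure; the population, exclusions, and data elements are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class MeasureSpec:
    """Explicit definition of a performance measure: population, exclusions,
    and the data elements needed to compute numerator and denominator."""
    name: str
    denominator: str          # who is at risk / eligible
    numerator: str            # the event or process being counted
    exclusions: list = field(default_factory=list)
    data_elements: dict = field(default_factory=dict)  # element -> allowable values

# Hypothetical in-house measure, following guidelines 1-3 above.
fall_assessment = MeasureSpec(
    name="Fall-risk assessment within 24 hours of admission",
    denominator="all patients aged 65+ admitted to the unit",
    numerator="those with a documented fall-risk assessment within 24 hours",
    exclusions=["admitted for <24 hours", "transferred from another unit"],
    data_elements={
        "admission_datetime": "timestamp",
        "assessment_datetime": "timestamp or missing",
        "age": "integer >= 0",
    },
)

# Guideline 4: pilot the collection on a handful of charts before going live.
print(fall_assessment.name)
print("exclusions:", ", ".join(fall_assessment.exclusions))
```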

 Implementing the Performance Improvement Program

    Successful Performance Improvement programs require an organizational commitment to implementation of the Performance Improvement processes and principles outlined in this chapter. 

    Consequently, this commitment requires a defined, organized approach that most organizations embrace and define in the form of a written plan. The plan outlines the approach the organization uses to improve care and safety for its patients. 

    There are several important elements that must be addressed in order to implement the Performance Improvement program effectively. The scope of service, which addresses the types of patients and care that is rendered, provides direction on the selection of performance measures. 

    An authority and responsibility statement in the document defines who is able to implement the quality program and make decisions that will affect its implementation. Finally, it is important to define the committee structure used to effectively analyze and communicate improvement efforts to the organization. 

    The success of the Performance Improvement program is highly dependent on a well-defined structure and appropriate selection of performance measures. The following is a list of issues that, if not addressed, may negatively impact the success of the quality program:

1. Lack of focus: a measure that tries to track too many criteria at the same time or is too complicated to administer, interpret, or use for quality monitoring and improvement

2. Wrong type of measure: a measure that calculates indicators the wrong way (e.g., uses rates when ratios are more appropriate; uses a continuous scale rather than a discrete scale; measures a process when the outcome is measurable and of greater interest)

3. Unclear definitions: a measure that is too broad or too vague in its scope and definitions (e.g., population is too heterogeneous, no risk adjustment, unclear data elements, poorly defined values)

4. Too much work: a measure that requires too much clinician time to generate the data or too much manual chart abstraction

5. Reinventing the wheel: a measure that is a reinvention rather than an improvement of a performance measure

6. Events not under control: a measure that focuses on a process or outcome that is outside the organization's (or the unit's) control to improve

7. Trying to do research rather than quality improvement: data collection and analysis are done for the sake of research rather than for improvement of nursing care and the health and well-being of the patients

8. Poor communication of results: the format of communication does not target and enable change

9. Uninterpretable and underused: uninterpretable results are of little relevance to improving geriatric nursing care

    In summary, the success of the Quality Assessment and Performance Improvement program's ability to measure, evaluate, and improve the quality of nursing care to health system elders lies in the planning.

    First, it is important to define the scope of services provided and those to be monitored and improved. 

    Second, identify performance measures that are reflective of the care provided. Indicators may be developed internally or may be obtained from external sources of outcomes and data collection methodologies. 

    Third, it is important to analyze the data, pulling together the right people to evaluate processes, make recommendations, and improve care. Finally, it is important to communicate findings across the organization and celebrate success.
