Making Health Care Safer II

An Updated Critical Analysis of the Evidence for Patient Safety Practices

The full report and this summary are available at http://www.ahrq.gov/research/findings/evidence-based-reports/ptsafetyuptp.html.

Background

The 1999 Institute of Medicine report "To Err is Human: Building a Safer Health System" is credited by many with launching the modern patient safety movement.1 A year after this report was published, as part of its initial portfolio of patient safety activities, the Agency for Healthcare Research and Quality (AHRQ) commissioned a group from the University of California, San Francisco–Stanford Evidence-based Practice Center (EPC) to analyze the evidence behind a diverse group of patient safety practices (PSPs) that existed at that time.

The resulting 2001 report, "Making Health Care Safer: A Critical Analysis of Patient Safety Practices,"2 hereafter referred to as "Making Health Care Safer," was both influential and controversial. AHRQ distributed a large number of copies of the report, and it became a cornerstone of other efforts (such as the National Quality Forum's list of 34 "Safe Practices for Better Healthcare")3 to rank safety practices by strength of evidence. However, the low rankings given to some popular safety practices, such as computerized order entry, raised fundamental questions about the role of evidence-based medicine in quality and safety practices.

Since the "Making Health Care Safer" report was published, the safety field has matured. Regulators and accreditors encourage health care organizations to adopt "safe practices" and to avoid adverse events that are considered wholly or largely preventable. Substantial money and person-hours have been invested in efforts to improve safety, and almost all health care delivery organizations regard safety as a primary strategic priority.

However, evidence indicates that progress has not matched the efforts and investment. Some PSPs have resulted in unintended consequences, whereas others have proven highly context dependent, working effectively in a research setting but failing during broader implementation. In the past 2 years, three studies have found high rates of preventable harm in hospitals,4-6 one of which found no improvement in adverse event rates from 2003 to 2008.

Against this backdrop, AHRQ commissioned an updated research report on the state of PSPs. Because many of the project team members and much of the methodology were drawn from the initial "Making Health Care Safer" project, and because most of the relevant practices were reviewed then, we see this report as a natural sequel to the 2001 report. However, because of the burgeoning literature relevant to patient safety and the limits of budget and time, we chose to examine a subset of PSPs (chosen through methods described below). Moreover, part of the maturation of the safety field has included a deeper appreciation of the importance of context in patient safety practices, a topic examined by our research team in the 2010 report, "Assessing the Evidence for Context-Sensitive Effectiveness and Safety of Patient Safety Practices: Developing Criteria," hereafter referred to as "Context Sensitivity."7 Accordingly, this report emphasizes matters of context and generalizability, as well as unintended consequences, to a greater degree than the 2001 "Making Health Care Safer" report.

Objectives

The goal of this project was to conduct a systematic literature review evaluating the evidence for a large number of patient safety practices.

Analytic Framework

For this report, we adopted the definition of a PSP used in the 2001 "Making Health Care Safer" report:

A Patient Safety Practice is a type of process or structure whose application reduces the probability of adverse events resulting from exposure to the health care system across a range of diseases and procedures.

The framework for considering the evidence regarding a PSP was worked out as part of the "Context Sensitivity" report.7 One of the principal challenges in reviewing PSPs has been the question of what constitutes evidence for them. Many practices intended to improve quality and safety are complex sociotechnical interventions whose targets may be entire health care organizations or groups of providers, and these interventions may be aimed at rare events. To address this challenge, we recognize that PSPs must be evaluated along two dimensions: the evidence regarding the outcomes of the safe practices, and the contextual factors influencing the practices' use and effectiveness.

These dimensions are represented in Figure A, which depicts a sample PSP consisting of a bundle of components (the individual boxes) and the context within which the PSP is embedded. Important evaluation questions, as depicted on the right in the figure, include effectiveness and harms, implementation, and adoption and spread. We then apply criteria to evaluate the four factors that together constitute quality (depicted as puzzle pieces in the bottom half of the figure). They include:

  1. Constructs about the PSP, its components, context factors, outcomes, and ways to accurately measure these constructs.
  2. Logic model or conceptual framework about the expected relationships among these constructs.
  3. Internal validity to assess the PSP results in a particular setting.
  4. External validity to assess the likelihood of being able to garner the same results in another setting.
We then synthesize this information into an evaluation of the strength of the evidence for a particular PSP.

Figure A. Framework for evidence assessment of patient safety practices

The principal results of the "Context Sensitivity" report included the following key points.

  • Whereas controlled trials of PSP implementations offer investigators greater control of sources of systematic error than do observational studies, trials often are not feasible in terms of time or resources. Also, controlled trials are often not possible for PSPs requiring large-scale organizational change or PSPs targeted at very rare events. Furthermore, the standardization imposed by the clinical trial paradigm may stifle the adaptive responses necessary for some quality improvement or patient safety projects. Hence, researchers may need to use designs other than randomized controlled trials to develop strong evidence about the effectiveness of some PSPs.
  • Regardless of the study design chosen for an evaluation, components that are critical for evaluating a PSP in terms of how it worked in the study site, and whether it might work in other sites, include the following:
    • Explicit description of the theory for the chosen intervention components, and/or an explicit logic model for "why this PSP should work".
    • Description of the PSP in sufficient detail that it can be replicated, including the expected change in staff roles.
    • Measurement of contexts.
    • Explanation, in detail, of the implementation process, the actual effects on staff roles, and changes over time in the implementation or the intervention.
    • Assessment of the impact of the PSP on outcomes and possible unexpected effects (including data on costs, when available).
    • For studies with multiple intervention sites, assessment of the influence of context on intervention and implementation effectiveness (processes and clinical outcomes).
  • High-priority contexts for assessing any PSP implementation include measuring and reporting information for each of the following four domains:
    • Structural organizational characteristics (such as size, location, financial status, existing quality and safety infrastructure).
    • External factors (such as regulatory requirements, the presence in the external environment of payments or penalties such as pay-for-performance or public reporting, national patient safety campaigns or collaboratives, or local sentinel patient safety events).
    • Patient safety culture (not to be confused with the larger organizational culture), teamwork, and leadership at the level of the unit.
    • Availability of implementation and management tools (such as staff education and training, presence of dedicated time for training, use of internal audit-and-feedback, presence of internal or external individuals responsible for the implementation, or degree of local tailoring of any intervention).
These principles guided our search for evidence, and the way in which we presented our findings in this report.

Methods

We divided the project into three phases: topic refinement, the evidence review, and the critical review and interpretation of the evidence. The project team performed topic refinement and conducted the critical review of the evidence jointly with the Technical Expert Panel (TEP), which had also participated in the "Context Sensitivity" project. This TEP included many of the key patient safety leaders in the United States, Canada, and the United Kingdom: experts in specific PSPs and evaluation methods, as well as persons charged with implementing PSPs in hospitals and clinics.

Topic Refinement

Because the goals of the project were to assess the evidence of the effectiveness of new safe practices and the evidence of implementation for current safe practices, most PSPs were eligible for this review. Thus, our first task was to refine the scope of the topic to fit within the timeframe and budget of the project, a task undertaken by the project team and the TEP. To accomplish this task, we created an initial list of 158 PSPs that we considered potentially eligible for inclusion. Through a process of internal team triage, group discussion with the TEP, and formal TEP votes, we narrowed the list to 41 PSPs for which a review of evidence was judged likely to be most helpful to providers, policymakers, and patients. However, this number of PSPs was still too large for us to review the evidence comprehensively within the timeframe. For that reason, we asked our TEP whether "breadth" or "depth" was likely to be more valuable for stakeholders; in other words, we asked whether the review should focus on fewer topics in more detail or cover all topics but with less detail. Our TEP recommended a "hybrid" approach, in which some topics would be reviewed in depth, whereas other topics would receive only a "brief review."

Topics could be considered as needing only a "brief review" for several reasons: the PSP is already well established; stakeholders need to know only "what's new" since the last time a topic was reviewed in depth; new evidence suggests the PSP may not be as effective as originally believed, so it is no longer a priority PSP; or the PSP is emerging, with little evidence accumulated. We ultimately settled on 18 in-depth reviews and 23 brief reviews.

Evidence Reviews

In-Depth Reviews

Overall approach. For many of the 18 topics designated to receive an in-depth review, a systematic review was likely to exist. Thus, a search to identify existing systematic reviews was usually the project team's first step. To assess the potential utility of such reviews, we followed the procedures proposed by Whitlock and colleagues,8 which essentially meant addressing the following two questions:

  1. Is the existing review sufficiently "on topic" to be of use?
  2. Is the existing review of sufficient quality for us to have confidence in the results?
If an existing systematic review was judged to be sufficiently "on topic" and of acceptable quality, we took one of two steps: either we performed an "update" search, searching databases for new evidence published since the end date of the search in the existing systematic review, or we conducted a search for "signals for updating." Such searches generally followed the criteria proposed by Shojania and colleagues9 and involved searching high-yield databases and journals for "pivotal studies" that could signal that a systematic review is out of date. Any evidence identified via the update search or the "signals" search was added to the evidence base from the existing systematic review.

Some PSPs had no existing systematic reviews, while other PSPs had prior reviews that were either not sufficiently relevant or were not of sufficient quality to be used. In those situations, we conducted new searches using guidance as outlined in AHRQ's "Methods Guide for Effectiveness and Comparative Effectiveness Reviews."10
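
To make the triage explicit, the following is a minimal sketch of the decision logic described above, assuming a simplified yes/no screen for each question; the function name and return labels are ours, not AHRQ's:

    # Toy sketch of the review-triage logic described above (illustrative only).
    def search_strategy(has_review: bool, on_topic: bool, acceptable_quality: bool) -> str:
        """Choose how to build the evidence base for one PSP."""
        if not (has_review and on_topic and acceptable_quality):
            # No usable prior review: conduct a new systematic search.
            return "new search per the AHRQ Methods Guide"
        # A usable review exists: extend it rather than repeat it.
        return "update search, or search for 'signals for updating'"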

Evidence about context, implementation, and adoption is a key aspect of this review. We searched for evidence on these topics in two ways:

  • We looked for and extracted data about contexts and implementation from the articles contributing to the evidence of effectiveness.
  • We identified "implementation studies" from our literature searches. "Implementation studies" focus on the implementation process, particularly the elements demonstrated or believed to be of special importance for the success, or lack of success, of the intervention. To be eligible, implementation studies needed to either report or be linked to reports of effectiveness outcomes.
Reporting format. We took the format for in-depth reviews from AHRQ's "Context Sensitivity" report. Table A outlines the format of the in-depth reviews.

Brief Reviews

Brief reviews are not full systematic reviews. The goals of the brief reviews covered in this report varied by PSP according to the needs of stakeholders. The assessment could focus on either information about effectiveness of an emerging PSP or implementation of an established PSP; alternatively, the review could explore whether new evidence calls into question the effectiveness of an existing PSP. Thus, the methods for the brief reviews differed by topic. However, in general, brief reviews were conducted by a content expert who worked with the project team. The brief reviews involved focused literature searches for evidence relevant to the specific need. The evidence was then narratively summarized in a format that also varied with the particular goal.

Table A. Format for in-depth reviews

How important is the problem?
This section briefly sketches the nature of the target for the Patient Safety Practice.
What is the Patient Safety Practice?
This section describes the practice or practices proposed and evaluated.
Why should this Patient Safety Practice work?
This section describes what has been written about the basis for a proposed Patient Safety Practice, such as an underlying theory, a logic model for how it should work, or prior data.
What are the beneficial effects of the Patient Safety Practice?
This section provides the review of the evidence of effectiveness, and is the section most similar to traditional Evidence-based Practice Center reports.
What are the harms of the Patient Safety Practice?
This section contains the evidence of harms. Unlike reviews of most clinical interventions, evaluations of Patient Safety Practices do not routinely assess potential harms; thus, for most topics, this section is underdeveloped.
How has the Patient Safety Practice been implemented, and in what contexts?
This section describes what has been reported about how to implement the Patient Safety Practice and the range of institutions or contexts in which it has been implemented. When there is sufficient evidence, implementation studies are evaluated qualitatively for themes regarding effective implementation.
Are there any data about costs?
This section describes the evidence of costs of implementing the Patient Safety Practice, or, in some cases, cost-effectiveness analyses that have been performed.
Are there any data about the effect of context on effectiveness?
This section describes the evidence about whether the Patient Safety Practice has been shown to have differential effectiveness in different contexts. The "Context Sensitivity" project defined important contexts for Patient Safety Practices in four domains: external factors (e.g., financial or performance incentives or Patient Safety Practice regulations); structural organizational characteristics (e.g., size, organizational complexity, or financial status); safety culture, teamwork, and leadership involvement; and availability of implementation and management tools (e.g., organizational training incentives).11

Evidence Summary

We judged that users of this report would want a summary of the evidence for each topic. Such summary messages may facilitate uptake of the findings. The project team developed the following summary domains with input from the TEP.

Scope of the problem. In general, we addressed two issues: the frequency of the safety problem and the severity of the average event. For benchmarks, we regarded safety problems that occur approximately once per 100 hospitalizations as "common"; examples include falls, venous thromboembolism (VTE), potential adverse drug events, and pressure ulcers. In contrast, events an order of magnitude or more lower in frequency were considered "rare"; such events include inpatient suicide, wrong-site surgery, and surgical items left inside a patient. The scope must also consider the severity of each event; for instance, most falls do not result in injury, and most potential adverse drug events do not result in clinical harm, whereas each case of inpatient suicide or wrong-site surgery is devastating.
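
As a rough illustration of these frequency benchmarks only, consider the sketch below; the cutoffs simply restate the "once per 100 hospitalizations" and "order of magnitude lower" language above and are not thresholds taken from the report:

    # Illustrative sketch of the frequency benchmarks above.
    def classify_frequency(events: int, hospitalizations: int) -> str:
        """Label an event rate per the report's rough benchmarks."""
        rate = events / hospitalizations
        if rate >= 1 / 100:      # about 1 per 100 hospitalizations or more
            return "common"      # e.g., falls, VTE, pressure ulcers
        if rate <= 1 / 1000:     # an order of magnitude (or more) lower
            return "rare"        # e.g., wrong-site surgery, inpatient suicide
        return "intermediate"    # between the two benchmarks

    # Example: 250 falls across 20,000 hospitalizations -> "common"
    print(classify_frequency(250, 20_000))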

Strength of evidence for effectiveness. This assessment follows a framework for strength of evidence that the project team adapted from existing EPC methods guidance12 to increase its relevance to patient safety practices. This means that, in addition to the standard EPC criteria of inconsistency, imprecision, and the possibility of reporting bias, our strength-of-evidence assessments included evidence about context, implementation, and the use of theory or logic models.

Evidence on potential for harmful unintended consequences. Most PSP evaluators have not explicitly assessed the possibility of harm. Consequently, this domain includes evidence of both actual harm and the potential for harm. The ratings on known or potential harms ranged from high risk of harm to low (or negligible) risk of harm; in some cases, the evidence was too sparse to provide a rating.

Estimate of costs. This domain is speculative, because most evaluations do not present cost data. However, we believed that at least a rough estimate of cost would be useful information to include in this report. Therefore, we used the following categories and benchmarks, noting, where necessary, the factors that might cause cost estimates to vary (a small illustrative sketch follows the list):

  • Low cost. PSPs that do not require hiring new staff or large capital outlays but instead involve training existing staff and purchasing some supplies. Examples include most fall prevention programs, VTE prophylaxis, and "Do Not Use" lists of hazardous abbreviations.
  • Medium cost. PSPs that might require hiring one or a few new staff members, have modest capital outlays, or incur ongoing monitoring costs. Examples include some fall prevention programs, many clinical pharmacist interventions, and participation in the American College of Surgeons outcomes reporting system ($135,000/year).13
  • High cost. PSPs that require hiring substantial numbers of new staff, have considerable capital outlays, or both. Examples include computerized order entry (because it requires an electronic health record), hiring many nurses to achieve a certain nurse-to-patient ratio, and facility-wide infection control procedures (estimated at $600,000 per year for a single intensive care unit).14
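
To make the tiers concrete, here is a minimal sketch assuming a simplified resourcing profile; the field names and numeric thresholds are illustrative assumptions of ours, not figures from the report:

    from dataclasses import dataclass

    @dataclass
    class ResourceProfile:
        new_staff_hired: int        # full-time staff the PSP requires
        capital_outlay_usd: float   # one-time capital spending
        ongoing_monitoring: bool    # recurring monitoring or reporting costs

    def cost_tier(p: ResourceProfile) -> str:
        """Map a resourcing profile onto the report's rough cost tiers."""
        if p.new_staff_hired >= 5 or p.capital_outlay_usd >= 500_000:
            return "high"    # e.g., computerized order entry, staffing ratios
        if p.new_staff_hired >= 1 or p.ongoing_monitoring:
            return "medium"  # e.g., clinical pharmacist interventions
        return "low"         # e.g., "Do Not Use" abbreviation lists

    # Example: outcomes reporting with ~$135,000/year in ongoing costs
    print(cost_tier(ResourceProfile(0, 0, True)))  # -> "medium"
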
Implementation issues. This section summarizes how much we know about how to implement the PSP and how difficult it is to implement. To approach the question of how much we know, we considered the available evidence about implementation, the existence of data about the effect and influence of context, the degree to which a PSP has been implemented, and the presence of implementation tools, such as written materials and training manuals.

For the question of implementation difficulty, we used three categories: difficult, for PSPs that require large-scale organizational change; not difficult, for PSPs that require protocols for drugs or devices, such as those needed to reduce radiation exposure or to help prevent stress-related gastrointestinal bleeding; and moderate, for PSPs falling between the extremes.
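
Read together, the five domains yield one summary record per reviewed PSP (cf. Table B). A hypothetical data-structure sketch follows; the field names and example values are ours for illustration, not ratings from the report:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PSPSummary:
        """One row of an evidence summary table (illustrative)."""
        name: str
        scope: str                      # frequency/severity of the target problem
        evidence_strength: str          # e.g., "low", "moderate", "high"
        harm_potential: Optional[str]   # None when evidence is too sparse to rate
        cost_tier: str                  # "low" / "medium" / "high"
        implementation_difficulty: str  # "not difficult" / "moderate" / "difficult"

    example = PSPSummary(
        name="Hypothetical PSP",
        scope="common target problem",
        evidence_strength="moderate",
        harm_potential=None,            # evidence too sparse to rate
        cost_tier="low",
        implementation_difficulty="moderate",
    )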

Critical Review and Interpretation of Evidence

The TEP reviewed the results of the evidence review performed by the project team both in a written draft document and at a face-to-face meeting in January 2012. One outcome of this review was a set of recommendations about priorities for PSP adoption.

Results

We completed 18 in-depth reviews and 23 brief reviews. Table B summarizes the findings according to the five main issues previously described (scope, strength of evidence, harms, costs, and implementation). The table is organized into two main sections: PSPs aimed at a specific (single) patient safety target, such as adverse drug events, or at general clinical topics, such as preventing pressure ulcers; and PSPs designed to improve the overall system or to address multiple patient safety targets, such as nurse-staffing ratios or computerized provider order entry. In some cases, the text in the PSP column differs slightly from the chapter heading for that PSP. This reflects our TEP's desire to include in the table the target safety problem (when the PSP is aimed at a specific problem), more specification, or an example of the PSP (e.g., adding "such as a centralized display of consolidated data" to the PSP designated as "operating room integration and display systems").

Discussion

Since the 2001 report, "Making Health Care Safer," a vast amount of new information on PSPs has emerged. Compared with a decade ago, more agreement is now evident on what constitutes evidence of effectiveness and the importance of implementation and context. In this review, we determined that the strength of evidence was at least moderate for 20 PSPs, or about half of those reviewed. For 26 of the PSPs, we judged that evidence of at least moderate strength was available on how to implement them.

Thus, sufficient evidence exists about effectiveness and implementation to permit our TEP members to conclude that some PSPs are ready to be "strongly encouraged" for adoption by health care providers. Their assessments were based explicitly on the combination of the available evidence with their expert judgment in interpreting the evidence. The 10 "strongly encouraged" PSPs are listed in Table C.

Table C. Strongly encouraged patient safety practices

  • Preoperative checklists and anesthesia checklists to prevent operative and post-operative events.
  • Bundles that include checklists to prevent central line-associated bloodstream infections.
  • Interventions to reduce urinary catheter use, including catheter reminders, stop orders, or nurse-initiated removal protocols.
  • Bundles that include head-of-bed elevation, sedation vacations, oral care with chlorhexidine, and subglottic-suctioning endotracheal tubes to prevent ventilator-associated pneumonia.
  • Hand hygiene.
  • "Do Not Use" list for hazardous abbreviations.
  • Multicomponent interventions to reduce pressure ulcers.
  • Barrier precautions to prevent healthcare-associated infections.
  • Use of real-time ultrasound for central line placement.
  • Interventions to improve prophylaxis for venous thromboembolism.
The TEP members concluded that several other PSPs had sufficient evidence of effectiveness and implementation and should be "encouraged" for adoption. The 12 "encouraged" PSPs are listed in Table D.

Table D. Encouraged patient safety practices

  • Multicomponent interventions to reduce falls.
  • Use of clinical pharmacists to reduce adverse drug events.
  • Documentation of patient preferences for life-sustaining treatment.
  • Obtaining informed consent to improve patients' understanding of the potential risks of procedures.
  • Team training.
  • Medication reconciliation.
  • Practices to reduce radiation exposure from fluoroscopy and computed tomography scans.
  • Use of surgical outcome measurements and report cards, like the American College of Surgeons National Surgical Quality Improvement Program.
  • Rapid response systems.
  • Utilization of complementary methods for detecting adverse events/medical errors to monitor for patient safety problems.
  • Computerized provider order entry.
  • Use of simulation exercises in patient safety efforts.
The 22 PSPs in Tables C and D represent practices that health care providers can consider for adoption now. This recommendation particularly applies to the 10 "strongly encouraged" practices. For these practices, at least in the judgment of our TEP, there is sufficient knowledge to implement them, and doing so will likely result in safer care. Future evaluations will likely further the knowledge of how best to implement the practices to make them most effective. In the meantime, however, our TEP believes that providers should not delay considering these practices for adoption, as enough is known now to permit health care systems to move forward.

Limitations

Because of limited resources and time, the current report does not cover the entire patient safety field, which has grown exponentially since the last report, both in the number of potential PSPs and in the amount of data about individual PSPs. For that reason, we used an explicit and transparent process to select which PSPs to evaluate, and our final list of 41 (from the more than 150 candidates) included the PSPs we felt were of highest priority to policymakers and providers.

Secondly, we did not perform in-depth reviews for all 41 PSPs. To maximize use of the available time and resources, we tailored our methods to the needs of our stakeholders. In particular, we targeted for in-depth review the 18 PSPs that were of greatest interest to our stakeholders or for which we likely had the most new information; the remaining 23 PSPs received brief reviews. It is important to note that the decisions about which PSPs would receive which level of scrutiny and analysis were made by a broadly representative stakeholder committee.

Thirdly, the in-depth reviews, although thorough, did not conform to all of the criteria for conducting an evidence review as presented in the Institute of Medicine's report, "Finding What Works in Health Care: Standards for Systematic Reviews,"15 or to all the criteria in AHRQ's "Methods Guide for Effectiveness and Comparative Effectiveness Reviews"10; for example, we did not publicly post a protocol for each of the individual reviews. We used our collective experience as EPC team members to adapt existing EPC methods in a way that best preserved the essence of a systematic review while allowing for the completion of 18 in-depth reviews within 9 months and within the available budget.

Additionally, over time, we will likely improve our methods for assessing evidence regarding how patient safety interventions affect health care processes and outcomes. The methods we used for this report incorporate new perspectives regarding the importance of implementation and context, which was the focus of the "Context Sensitivity" report; likewise, in the future, we expect to increase our understanding of the interactions between multiple intervention, implementation, and organizational variables and how the variables influence safety outcomes. If future research reveals that these variables interact in ways that our current understanding of theory and logic models cannot explain, we will need to modify the methods for evaluating PSPs again.

Lastly, we relied on the judgment of our TEP at every important step of the project. Therefore, the results are as much a product of these judgments as of our systematic review methods. Hence, our results might be sensitive to the selection of particular experts for our TEP. However, we mitigated this potential bias by including more than twice as many experts on our TEP as we typically would for an EPC review, which allowed us to include a diverse set of stakeholders from the U.S., Canada, and the United Kingdom. Stakeholders included PSP developers and evaluators, patient safety policymakers, and experts in design and evaluation methods. Rather than regarding the tight linkage between the needs of the stakeholders and the work of the EPCs as a limitation, we view it as a strength that increases the likelihood that the results of the review will be meaningful to providers, payors, and patients, and that the report's results will lead to meaningful change.

Future Research Needs

Despite over a decade of effort, there is little evidence that patient outcomes (broadly measured) have significantly improved, yet there has been some success (generally in efforts to reduce one type of harm, usually using one method of improvement). For example, efforts have focused on reducing blood stream infections, improving teamwork, or enhancing patient engagement.

If health care is to make significant improvements in patient safety, research should inform and guide these efforts. We have learned much about how to improve safety, yet we need to learn much more. Acquiring this knowledge will require investments in patient safety research, including investing in "basic" methodological research. To date, investments in patient safety research have fallen far short of the magnitude of the problem.

To achieve progress in improving patient safety, research is needed in a number of areas, including the following:

  • "Basic" patient safety research to develop new tools and measures, and research to ensure that the tool matches the problem.
  • A larger number of valid measures of patient safety.
  • Better methods to measure context and how an intervention was implemented.
  • Methods to identify and provide the necessary skills, resources, and accountability (e.g., a safety management infrastructure) at each level of the health care system.
  • More effective and less burdensome methods of improvement so that clinicians, researchers, and administrators can work on reducing all potential patient harms, rather than a select few.

References

1. Kohn L, Corrigan J, Donaldson M, eds. To Err is Human: Building a Safer Health System. Committee on Quality of Health Care in America, Institute of Medicine. Washington, DC: The National Academies Press; 2000.

2. Shojania KG, Duncan BW, McDonald KM, et al., eds. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Evidence Report/Technology Assessment No. 43. (Prepared by the University of California at San Francisco–Stanford Evidence-based Practice Center under Contract No. 290-97-0013.) AHRQ Publication No. 01-E058. Rockville, MD: Agency for Healthcare Research and Quality. July 2001. www.effectivehealthcare.ahrq.gov.

3. National Quality Forum. Safe Practices for Better Healthcare: 2010 Update. www.qualityforum.org/Publications/2010/04/Safe_Practices_for_Better_Healthcare_%E2%80%93_2010_Update.aspx. Accessed December 13, 2011.

4. Classen DC, Resar R, Griffin F, et al. 'Global trigger tool' shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood) 2011;30(4):581-9. PMID: 21471476.

5. Landrigan CP, Parry GJ, Bones CB, et al. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med 2010;363(22):2124-34. PMID: 21105794.

6. Levinson DR. Adverse Events in Hospitals: National Incidence Among Medicare Beneficiaries. OEI-06-09-00090. Office of Inspector General. Department of Health and Human Services. November 2010.

7. Shekelle P, Pronovost P, Wachter R, et al. Assessing the Evidence for Context-Sensitive Effectiveness and Safety of Patient Safety Practices: Developing Criteria. (Prepared by the Southern California-RAND Evidence-based Practice Center under Contract No. 290-2009-10001C). AHRQ Publication No. 11-0006-EF. Rockville, MD: Agency for Healthcare Research and Quality. December 2010. www.effectivehealthcare.ahrq.gov.

8. Whitlock EP, Lin JS, Chou R, et al. Using existing systematic reviews in complex systematic reviews. Ann Intern Med 2008;148(10):776-82. PMID: 18490690.

9. Shojania KG, Sampson M, Ansari MT, et al. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med 2007;147(4):224-33. PMID: 17638714.

10. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(12)-EHC063-EF. Rockville, MD: Agency for Healthcare Research and Quality. April 2012. www.effectivehealthcare.ahrq.gov.

11. Taylor SL, Dy S, Foy R, et al. What context features might be important determinants of the effectiveness of patient safety practice interventions? BMJ Qual Saf 2011;20(7):611-7. PMID: 21617166.

12. Owens DK, Lohr KN, Atkins D, et al. AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions—Agency for Healthcare Research and Quality and the Effective Health-Care Program. J Clin Epidemiol 2010 May;63(5):513-23. PMID: 19595577.

13. Maggard-Gibbons M. Chapter 14. Use of Report Cards and Outcome Measurements to Improve Safety of Surgical Care: American College of Surgeons National Quality Improvement Program. In: Making Health Care Safer II: An Updated Critical Analysis of the Evidence for Patient Safety Practices. Comparative Effectiveness Review No. 211. (Prepared by the Southern California-RAND Evidence-based Practice Center under Contract No. 290-2007-10062-I.) AHRQ Publication No. 13-E001-EF. Rockville, MD: Agency for Healthcare Research and Quality. March 2013. www.effectivehealthcare.ahrq.gov.

14. Shekelle PG. Chapter 34. Effect of Nurse-to-Patient Staffing Ratios on Patient Morbidity and Mortality. In: Making Health Care Safer II: An Updated Critical Analysis of the Evidence for Patient Safety Practices. Comparative Effectiveness Review No. 211. (Prepared by the Southern California-RAND Evidence-based Practice Center under Contract No. 290-2007-10062-I.) AHRQ Publication No. 13-E001-EF. Rockville, MD: Agency for Healthcare Research and Quality. March 2013. www.effectivehealthcare.ahrq.gov.

15. Committee on Standards for Systematic Reviews of Comparative Effectiveness Research, Institute of Medicine. Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press; 2011.

Full Report

This executive summary is part of the following document: Shekelle PG, Wachter RM, Pronovost PJ, Schoelles K, McDonald KM, Dy SM, Shojania K, Reston J, Berger Z, Johnsen B, Larkin JW, Lucas S, Martinez K, Motala A, Newberry SJ, Noble M, Pfoh E, Ranji SR, Rennke S, Schmidt E, Shanman R, Sullivan N, Sun F, Tipton K, Treadwell JR, Tsou A, Vaiana ME, Weaver SJ, Wilson R, Winters BD. Making Health Care Safer II: An Updated Critical Analysis of the Evidence for Patient Safety Practices. Evidence Report No. 211. (Prepared by the Southern California-RAND Evidence-based Practice Center under Contract No. 290-2007-10062-I.) AHRQ Publication No. 13-E001-EF. Rockville, MD: Agency for Healthcare Research and Quality. March 2013. http://www.ahrq.gov/research/findings/evidence-based-reports/ptsafetyuptp.html.