National Quality Measures Clearinghouse | Expert Commentaries: Prospects for Comparing Hospital Performance Internationally on the Basis of Quality and Safety

National Quality Measures Clearinghouse (NQMC)

April 21, 2014



Prospects for Comparing Hospital Performance Internationally on the Basis of Quality and Safety
By: Susan Burnett
A wealth of data comparing health systems internationally is available through the Organisation for Economic Co-operation and Development (OECD) database of key international indicators (1) and the World Health Organization's (WHO's) Global Health Observatory. (2) International comparisons of health indicators at the country level are clearly important for policy makers, and a wide variety of statistics are available. However, if you are a health professional planning an international career move, a patient looking for the best hospital for treatment, or a manager looking for best practice around the world, then comparing hospitals on the basis of quality and safety becomes much more difficult.
Safety is especially challenging to measure and compare between hospitals. In our recent work exploring how to measure and monitor safety, we divided patient harm into six categories (3): treatment-specific harm, such as adverse drug reactions; harm due to overtreatment; general harm from healthcare, such as infections; failure to provide appropriate treatment, such as antibiotic prophylaxis; harm from delayed diagnosis; and psychological harm (feeling unsafe). Many hospitals will capture data on some of these types of harm, for example infection rates and incident reports, but few, if any, will have information on safety events in every category. Making comparisons between hospitals on patient safety is further complicated by safety culture: for example, does one hospital have fewer incident reports because it has a poor safety culture in which no one speaks up, or because it has a comprehensive safety programme?
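To make the categorisation concrete, here is a minimal Python sketch, using an entirely hypothetical incident-report format rather than data from our work, of tallying reports against these six categories so that the categories with no data remain visible:

```python
from collections import Counter

# The six harm categories described above.
HARM_CATEGORIES = [
    "treatment_specific",   # e.g. adverse drug reactions
    "overtreatment",
    "general_healthcare",   # e.g. healthcare-associated infections
    "failure_to_treat",     # e.g. missed antibiotic prophylaxis
    "delayed_diagnosis",
    "psychological",        # feeling unsafe
]

def tally_by_category(incident_reports):
    """Count reports per harm category, keeping zero counts visible."""
    counts = Counter(r["category"] for r in incident_reports)
    return {c: counts.get(c, 0) for c in HARM_CATEGORIES}

# Hypothetical reports: most hospitals capture only a subset of the categories.
reports = [
    {"category": "general_healthcare"},
    {"category": "treatment_specific"},
    {"category": "general_healthcare"},
]
print(tally_by_category(reports))
```

The zero counts are the point: a category with no reports may reflect either genuine safety or a blind spot in what the hospital measures.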
In a study across five European countries (4), we attempted to select hospitals for our research based on indicators of safety, patient experience, and clinical effectiveness. (5) The countries included were England, the Netherlands, Norway, Portugal and Sweden. The process started with a list of ten commonly used quality and safety process and outcome indicators, those widely regarded in the medical field as good practice. We asked each research team whether the data were collected by all hospitals in their country; who had access to the data (for example, whether the data were available to the public); how robust the data were; and what other indicators were available for consideration.
For patient safety, we proposed methicillin-resistant Staphylococcus aureus (MRSA) or Clostridium difficile (C. difficile) infection rates as a proxy measure. In England these rates were publicly available for every hospital; in Portugal the rates were not available to the public; in Sweden and the Netherlands the rates were so low that the data were not considered helpful; and in Norway the data were not available.
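As an illustration only, a short Python sketch of how such an infection rate might be expressed; the denominator of occupied bed-days is an assumption made for the example, not the specification used in any of the five countries:

```python
def infection_rate(cases, bed_days, per=100_000):
    """Infections per `per` occupied bed-days (assumed denominator)."""
    if bed_days <= 0:
        raise ValueError("bed_days must be positive")
    return cases / bed_days * per

# Hypothetical figures for two hospitals.
print(infection_rate(cases=12, bed_days=250_000))  # 4.8 per 100,000 bed-days
print(infection_rate(cases=3, bed_days=180_000))   # ~1.7 per 100,000 bed-days
```

Even this simple calculation is only comparable between hospitals if both the case definition and the denominator are standardised, which was not the situation we found.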
For patient experience, we examined a selection of process measures, such as operating on hip fractures within 48 hours. These data were available in all countries; however, in Sweden the timeframe was 18 hours, and in Norway the data were collected only for people aged over 65 years. In a more detailed study in 10 European hospitals, we found more emphasis on measuring patient satisfaction than on measuring patient experience. (6)
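The effect of these differing national definitions on comparability can be illustrated with a small Python example using invented figures:

```python
def on_time_share(hours_to_surgery, threshold_hours):
    """Share of hip-fracture patients operated on within the national threshold."""
    if not hours_to_surgery:
        return 0.0
    return sum(h <= threshold_hours for h in hours_to_surgery) / len(hours_to_surgery)

times = [12, 20, 30, 50, 70]     # hypothetical hours from admission to surgery
print(on_time_share(times, 48))  # 0.6 under a 48-hour definition
print(on_time_share(times, 18))  # 0.2 under Sweden's 18-hour definition
```

The same patients produce very different "performance" depending on which national definition is applied, so the published percentages cannot simply be set side by side.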
For clinical effectiveness, we sought to use condition-specific mortality rates and a composite mortality rate. The composite mortality rate was not available for all hospitals in Sweden, and the specification used to calculate the rate was different in the Netherlands; therefore, comparing hospitals using one mortality rate was not possible. However, we were able to use condition-specific mortality rates, such as those for stroke and abdominal aortic aneurysm (AAA), across hospitals in all five countries. Often these data were collected by medical societies for their own purposes but made available to hospitals and the public. This highlights other important factors affecting the quality of available data: which organisation collects the data, for what purpose, and how well that organisation is trusted to check the data and provide robust information.
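A simple sketch, with invented figures, of why the specification matters: the same hospital yields quite different condition-specific mortality rates depending on which deaths are counted.

```python
def mortality_rate(deaths, admissions):
    """Crude condition-specific mortality: deaths divided by admissions."""
    return deaths / admissions if admissions else float("nan")

# Hypothetical stroke figures for one hospital, counted under two different
# specifications of a death (the kind of difference we found between countries).
print(mortality_rate(deaths=42, admissions=400))  # 0.105  (deaths within 30 days)
print(mortality_rate(deaths=31, admissions=400))  # 0.0775 (in-hospital deaths only)
```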
Starting with a long list of potential indicators of quality and safety in hospitals, we found only four that we could usefully apply across all five countries, and even these had the limitations described above. These indicators were: hip fractures treated within a set time; caesarean section rates; condition-specific mortality rates; and surgical site infection rates. A further important limitation on the prospects for comparing hospital performance internationally was that none of these indicators applied to the whole hospital; rather, they applied to particular clinical areas such as orthopaedics or obstetrics. Nor could we use alternative methods to compare quality across hospitals internationally, such as quality regulation and accreditation, since the countries differed greatly in how these activities were conducted: they ranged from national regulation of every hospital in England and a developing accreditation system in Portugal, to voluntary systems in the Netherlands, and no formal regulation or accreditation systems in Norway and Sweden.
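The selection step itself amounts to keeping only those indicators collected in every country. A minimal Python sketch with a hypothetical availability map, not our actual study data, shows the idea:

```python
# Hypothetical availability map: indicator -> countries whose hospitals all collect it.
availability = {
    "hip_fracture_time_to_surgery": {"England", "Netherlands", "Norway", "Portugal", "Sweden"},
    "caesarean_section_rate":       {"England", "Netherlands", "Norway", "Portugal", "Sweden"},
    "condition_specific_mortality": {"England", "Netherlands", "Norway", "Portugal", "Sweden"},
    "surgical_site_infection_rate": {"England", "Netherlands", "Norway", "Portugal", "Sweden"},
    "composite_mortality_rate":     {"England", "Netherlands", "Portugal"},
    "mrsa_infection_rate":          {"England"},
}

countries = {"England", "Netherlands", "Norway", "Portugal", "Sweden"}
usable = [name for name, where in availability.items() if countries <= where]
print(usable)  # only indicators collected everywhere survive the filter
```

Each indicator dropped at this step narrows the picture further, which is why the surviving set covers only a few clinical areas rather than whole-hospital performance.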
What are the prospects for comparing hospital performance internationally on the basis of quality and safety? We faced many difficulties in our study spanning five European countries, and I have no doubt that these difficulties would multiply if the study were extended to include hospitals further afield, such as in the USA, Africa or Australasia. The current prospects, therefore, are not good. If we are to compare hospitals, there will need to be international agreement on a small set of well-defined indicators of quality and safety for which every hospital is mandated to collect data. The results would need to be verified through an external audit process and made available to the public. Agreement between the USA and some European countries would be a good start towards developing an international consensus. In the meantime, all health professionals, wherever they work, should be supported by their organisations to collect and use quality measures (7) for the purposes of learning and improving their services.

Authors
Susan Burnett
Centre for Patient Safety and Service Quality, Imperial College, London, UK
Disclaimer
The views and opinions expressed are those of the author and do not necessarily state or reflect those of the National Quality Measures Clearinghouse™ (NQMC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor ECRI Institute.
Potential Conflicts of Interest
Ms. Burnett states no personal financial or family conflict of interest with respect to this expert commentary. Ms. Burnett reports the following business/professional interest: the European Union FP7 Research Funding Programme (Imperial College, London) funded this research; the grant is now closed.
References


  1. Statistics. [Web site]. Paris (France): Organisation for Economic Co-operation and Development (OECD); [accessed 2014 Mar 3]. Available: www.oecd.org/statistics/.
  2. Global Health Observatory. [Web site]. Geneva (Switzerland): World Health Organization (WHO); [accessed 2014 Mar 3]. Available: www.who.int/gho/en/.
  3. Vincent C, Burnett S, Carthey J. The measurement and monitoring of safety. London (UK): Health Foundation; 2013 Apr.
  4. Robert GB, Anderson JE, Burnett SJ, Aase K, Andersson-Gare B, Bal R, Calltorp J, Nunes F, Weggelaar AM, Vincent CA, Fulop NJ, QUASER team. A longitudinal, multi-level comparative study of quality and safety in European hospitals: the QUASER study protocol. BMC Health Serv Res. 2011 Oct 26;11(1):285.
  5. Burnett S, Renz A, Wiig S, Fernandes A, Weggelaar AM, Calltorp J, Anderson JE, Robert G, Vincent C, Fulop N. Prospects for comparing European hospitals in terms of quality and safety: lessons from a comparative study in five countries. Int J Qual Health Care. 2013 Feb;25(1):1-7.
  6. Wiig S, Storm M, Aase K, Gjestsen MT, Solheim M, Harthug S, Robert G, Fulop N, QUASER team. Investigating the use of patient involvement and patient experience in quality improvement in Norway: rhetoric or reality? BMC Health Serv Res. 2013 Jun 6;13:206.
  7. National Quality Measures Clearinghouse (NQMC). [Web site]. Rockville (MD): Agency for Healthcare Research and Quality; [accessed 2014 Mar 3]. Available: www.qualitymeasures.ahrq.gov.
