
In Conversation With... James P. Bagian, MD, PE | AHRQ Patient Safety Network


Perspectives on Safety—Root Cause Analysis: What Have We Learned?

This month's interview features James P. Bagian, MD, PE, Director of the Center for Healthcare Engineering and Patient Safety at the University of Michigan, and a former astronaut. He co-chaired the team that produced the influential NPSF report RCA2: Improving Root Cause Analyses and Actions to Prevent Harm.

PSNet: Patient Safety Network

  • Perspectives on Safety
  • Published December 2016

In Conversation With... James P. Bagian, MD, PE


Interview


Editor's note: Dr. Bagian is Director of the Center for Healthcare Engineering and Patient Safety at the University of Michigan. A former astronaut, he served as the VA's first Chief Patient Safety Officer and the founding director of the VA National Center for Patient Safety from 1999–2010. He also co-chaired the team that produced the influential RCA2: Improving Root Cause Analyses and Actions to Prevent Harm report. We spoke with him about the RCA2 report and what we have learned about root cause analysis in health care ... and even whether we should use the term root cause analysis.
Dr. Robert M. Wachter: Given your aviation and astronautics background, when you began thinking about root cause analyses, what did you learn from that experience that you thought would be applicable to health care?
Dr. James P. Bagian: First of all, we wouldn't necessarily ever use the term root cause analysis. But the objective was the same. The way I would look at it is to try to understand what happened, why it happened, and what you do to prevent it from happening in the future. Now we could argue that's what root cause analysis wants to do, but it doesn't always work out that way. Seldom is there just one contributing factor or cause to a problem. Usually there are many. The question is: which causes do you end up finally addressing or trying to mitigate? The other thing is that in aviation and engineering in general, the symptoms are what we see. Treating the symptoms doesn't get to the cause. Looking at the causative factors might give the best leverage for the best long-term benefit.
RW: What do you think the lessons of the last 15 or so years have been as we've tried to apply this technique in health care?
JB: Certainly people were very enthusiastic to embrace it, but there really wasn't any codified way to say what are the qualities and properties of a good root cause analysis. I really don't like that term. It's not just the analysis but also the actions that come out of them. How do you formulate them? How do you make sure that you can do them? How do you make sure that they are done? How do you make sure that you measure the effectiveness? That wasn't really done. I mean some places codified it very well. At the VA, we were rigorous about how we went about it and what it was and what it wasn't. When we first started, I remember going to The Joint Commission and the term used in health care was already root cause analysis. And we didn't even want to call it that because we thought it was a bad term.
RW: What did you want to call it?
JB: W cubed: What happened? Why did it happen? What are we going to do to prevent it in the future? We looked at it from that standpoint because (as we say in the RCA2 white paper—but I've been saying this for years), first of all, root cause implies that there is one cause. Obviously there are many causes. And it only says analysis, and this is not just about doing an analysis. It's about taking action to mitigate the risk in the future. So while analysis is certainly important to understand it, if you don't take action then why did you even bother to begin with?
Some of the studies that have been published say root cause analysis didn't work or the utility seemed low. But these papers vary dramatically in how the components were performed. So while people use the term root cause analysis as if it is one specific process, in fact the way one organization uses the RCA term seldom describes the process that another organization terms an RCA. We're using a term that is poorly defined and not uniformly applied. And in general they come out with very simplistic solutions, if anything at all.
RW: Take us through some of the key recommendations. You already emphasized the need for an action plan. What other elements have you come to believe are important to make this a higher yield activity?
JB: We reviewed virtually every paper that has been written since the late 1990s and even before. We found that many places struggle to prioritize what to look at when they learn of issues through reporting or surveillance. Most places do not have an explicit, concrete process by which they prioritize whether an event should be investigated or what kind of actions should be taken. Most have a group that meets once a month or once a week, but it's very dependent on the personnel in the room and many other things, so they're not uniform and consistent in how they prioritize.
To address this, we talk about using risk-based prioritization. Very few institutions ever look at close calls or near misses. Even in the rare situation that they do look at them, they virtually never have an explicit, transparent, risk-based way to decide if they warrant further examination or require taking corrective action. I give this as a background because some places use harm-based criteria. If no one is harmed, then you don't even get on the scoreboard. So if you're a harm-based institution, and most places are, that means you don't look in any methodical way at close calls or near misses, which means you're giving up the opportunity to learn before you actually injure a patient. In high reliability organizations like aviation and nuclear power, they rely heavily on looking at close calls and near misses because they realize that's a way to identify vulnerabilities and thoughtfully decide whether to mitigate them. If you just wait until you hurt people, which is how health care generally works, then you're saying we have to injure somebody before we even get interested. So, the first recommendation is that the analyses need to be risk, not harm, based.
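To make the contrast between risk-based and harm-based prioritization concrete, here is a minimal illustrative sketch, not drawn from the interview or the RCA2 report: it assumes a simple severity-times-probability scoring scheme, loosely in the spirit of the VA's Safety Assessment Code matrix, with category names and score thresholds invented purely for illustration.

```python
# Illustrative sketch of risk-based (not harm-based) prioritization.
# The severity/probability categories and the score thresholds below are
# assumptions for illustration, loosely modeled on a Safety Assessment
# Code style matrix; they are not the VA's actual values.

SEVERITY = {"catastrophic": 4, "major": 3, "moderate": 2, "minor": 1}
PROBABILITY = {"frequent": 4, "occasional": 3, "uncommon": 2, "remote": 1}

def prioritize(severity: str, probability: str) -> str:
    """Return a priority band from severity and probability.

    Severity reflects the worst credible outcome, so a close call with no
    actual injury can still score high, which is the point of risk-based
    rather than harm-based prioritization.
    """
    score = SEVERITY[severity] * PROBABILITY[probability]
    if score >= 8:
        return "high: convene an RCA2 team"
    if score >= 4:
        return "intermediate: review and decide on further action"
    return "low: track and trend"

# A near miss that could plausibly recur and cause major harm is
# prioritized even though no patient was injured this time.
print(prioritize("major", "occasional"))  # high: convene an RCA2 team
print(prioritize("minor", "remote"))      # low: track and trend
```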
Next, you can think about reports as the fuel for the improvement fire. You have to educate people that, when they report in the safety system, they will be held harmless. That doesn't mean people get a free pass. But it does mean you articulate this in a way that people understand. In the VA, that was done with stakeholders such as the press, the Congressional oversight committees, The Joint Commission, the unions, and the various professional groups of nurses and physicians. All these groups unanimously agreed on a definition that we called a blameworthy act. If it was blameworthy, the case would be placed on an administrative route where the facts of the event have to be "rediscovered" by the administrative system, which could culminate in punitive action. But if it was not blameworthy, under no circumstances would there ever be punitive action. This strategy has been carried out in many places for many years, with well over a million reports, and there are no allegations of people experiencing ill effects. So we think that's another prerequisite.
RW: The idea is that if the case is blameworthy, it goes through a different process. But I imagine there are times where you don't figure out whether it's blameworthy until you're doing the deeper analysis.
JB: That's exactly right. The way we define blameworthy events is: if the act was thought to be a criminal act, if the act was thought to be done by the caregiver under the influence of alcohol or illicit drugs, or if it was intentionally unsafe. New Zealanders have adopted the same approach that we did at the VA and use the term deliberately unsafe. That's probably a better choice of words. The point being that if you knew it was unsafe but you did it anyway, then that is suspect and needs to go through the administrative route. You notice we don't include anywhere in the criteria whether you broke a rule or didn't follow a rule or if the event was thought to have resulted from a human error. None of those factors are relevant in determining whether an event is considered blameworthy. Just because you didn't follow the rules doesn't mean you did something wrong from a patient care perspective. The determination that an act was intentionally unsafe, which is blameworthy, can occur at any time from the initial receipt of the event report through the entire investigation process; once an event is thought to be blameworthy, it enters the administrative process and leaves the safety process, to preserve the nonpunitive nature of the safety system.
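As a sketch only (the data structure, field names, and routing labels are hypothetical, not the VA's actual system), the three blameworthy-act criteria described above can be expressed as a simple triage check; note that rule-breaking and human error are deliberately absent from the criteria.

```python
# Hypothetical sketch of the blameworthy-act triage described above.
# An event goes to the administrative (potentially punitive) route only if
# it meets one of the three criteria; otherwise it stays in the
# nonpunitive safety process.

from dataclasses import dataclass

@dataclass
class EventReport:
    suspected_criminal_act: bool
    caregiver_impaired_by_alcohol_or_drugs: bool
    intentionally_unsafe: bool

def route(event: EventReport) -> str:
    """Route blameworthy events out of the safety system; this check can be
    applied at intake or at any later point in the investigation."""
    if (event.suspected_criminal_act
            or event.caregiver_impaired_by_alcohol_or_drugs
            or event.intentionally_unsafe):
        return "administrative route"
    return "safety process (nonpunitive)"

# A rule violation made in good faith for the patient's benefit does not,
# by itself, meet any of the criteria.
print(route(EventReport(False, False, False)))  # safety process (nonpunitive)
```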
There are certainly many rules and guidelines that, while generally applicable and worth following, in certain specific instances are not. We wanted practitioners to understand that your goal is to deliver good care to the patient and—if you think a rule is incorrect—then you should try to deal with that in the immediate time frame, talk to your superior, to your colleagues, or whatever, but then ask yourself: Is it better to follow the rule and know you followed the rule but put the patient at higher risk, or to violate the rule for the benefit of the patient? If you decide on the latter, then that's what you should do. Then report it immediately, and have it be sorted out at that time. While that sounds kind of squishy, I can tell you that, after literally more than a million submissions, it's not that difficult a thing to do.
RW: Everything you've talked about makes a ton of sense. Of course you need to take the time to deeply dissect what was going on, not just find one root cause but all the underlying causes. You have to build in a follow-up process and an action plan, and you need to have the right people around the table. A lot of organizations are motivated to try to decrease errors and harm. So why do so few places get this right?
JB: A lot of it is common sense. But you cannot just cherry-pick the contributing factor that happens to appeal to you or the one that's easy. If you don't do them all, the chance of having sustainable good outcomes is pretty minimal. When many institutions go to do this, the people involved don't understand that you really have to commit to it. This is real work. You have to allow time to do it, and you cannot do it on the fly.
For example, if you have a pediatric patient die, and that makes the front page of the local paper, it certainly is a very emotionally tinged and tragic event. You'll hear people say, "Who are you going to hold responsible? Who's going to be fired?" Does management have the courage to look a reporter in the eye and say, "First we need to understand what happened. Our goal is to make sure that this cannot happen, or is very unlikely to happen, in the future. As for whether there's actually going to be individual punishment for those involved, we'll have to look into that." We seldom hear comments like these. Rather, we often see that leaders feel it's necessary to throw somebody to the wolves to show that they're taking this seriously. Sometimes the public or the press may be unsophisticated, and instead of having the attention span to look and see what ultimately was done and whether the risk was really reduced over time, they just lose interest and mainly ask, "How many scalps did we get?"
RW: My own bias is I think the culture is pretty good in terms of the blame and nonblame in the system, at least at UCSF. And yet we haven't quite figured this out. It feels more like a resource issue: how do we find the time and energy to do all of this right? How do you deal with a large volume of cases and have all these groups running around doing their thing and coming up with solutions that have a high chance of sticking, and then going back a year later and making sure that they worked and stuck? It feels like we tend to take shortcuts because of the lack of time and money.
JB: I agree with a lot of what you just said. But we have demonstrated that the business case is quite clear. If you have a well thought out prioritization scheme, you're quickly limiting the number of cases to a manageable volume. If you don't have that, then you're overwhelmed and don't know what to do. On the other end, it's important to come up with sound action plans and to measure success rigorously. At the VA, and we recommend this in this white paper as well, we said that the recommendations should be signed off by very senior leadership. So that would be another indirect way of bringing in the business case.
In the beginning, this was strongly resisted at the VA. Within a year of analyzing cases and implementing corrective actions using this method, there was a huge change of heart among our senior leaders, who began to say that this really is important. It improved our ability to lead. Many said if you're not doing this, you're missing the boat. In fact, in the VA, our staunchest adversaries became our most avid supporters. Unfortunately, in today's health care industry there are many places where that doesn't happen. The CEO is disengaged, the senior managers are disengaged, and the boards are disengaged. They don't know what goes on. They don't know what gets approved. They don't know why it gets disapproved. And they don't follow up. Really, part of this is culture.
RW: Yeah, and leadership.
JB: And leadership. I can tell you because we've seen it. When the rank and file see that when they identify a problem, they work on it, and it gets fixed and stays fixed, they're more likely to bring up problems in the future because they know they'll get fixed. That's instead of the typical concern you hear when you talk to people: "Well, they won't let us do it." And my question is: who is "they"? Look in the mirror. When staff are engaged and think they can make a difference, they feel like world-beaters. It's not a political thing. Write down the risk matrix that you use to prioritize. Tell the whole world, the press, the patients, and the community; then you just have to live by it. It sounds daunting, but when you do it, it makes things so easy. Unfortunately, very few places do that. That's a problem. They're often afraid to do it because they don't want to be accountable for doing what they said they'd do.
RW: Any other things you wanted to talk about?
JB: People could argue and say that the RCA2 process looks like a lot of what the VA does and did. A lot of the papers came from our work there. But we had a whole host of different people involved in this from different institutions like Kaiser, MedStar, other countries, and from patient advocacy groups, risk management groups, and others. Others have used these systems and seen success. Sweden has been using these same systems for a while. So have New Zealand, Australia, Denmark, and the NHS in the UK. So it's not like this is just a theoretical construct that's never been employed with reasonable success.
That's why we introduced this term RCA2 in this white paper. Everybody's used to saying RCA even though it might not be the right term. So let's call it RCA2 because the action part is key. Get that in people's minds and hope that people would look at this and try it. The Joint Commission has also been firmly behind this. RCA2 is not the only way, but it's a way that they think would be successful. So now that the RCA2 methodology is written down, people can look at it as a bundle and employ it that way. If they use it that way, I think we would see greater success.






