The Implementing Patient Safety blog series
Part one describes the growing sense of unease about the way we do safety in healthcare and how we could do it differently. It sets out the approach that dominates patient safety today, which some have termed Safety I.
Patient safety is stuck in a world of bureaucracy, negativity and blame. The tone and language of patient safety have led to disengagement, especially among clinicians. The methods we use to understand why things do not go as planned – such as root cause analysis and the 'five whys' – assume linear causality and are fit only for systems that are simple or complicated, not complex.
The evidence so far, particularly in relation to impact and improvement, calls into question the prevailing ways in which patient safety has been addressed to date. Are we actually making any difference? Are we actually learning how to make care safer? None of this is helped by a number of safety myths that are getting in the way of progress.
Thankfully, the way we think about safety is changing – Erik Hollnagel, Charles Vincent, René Amalberti and Jeffrey Braithwaite are just a few of the wonderful people who are helping us think about what we could do differently.
We need to shift our approach from focusing purely on failure to studying how things happen on a daily basis and how they typically go right (Safety II), balanced with learning from failure (Safety I). We need to understand complexity science and complex adaptive systems. We need to implement a just culture that supports people to speak out, to feel 'psychologically safe' to question and to challenge without fear of repercussions, and we need urgently to address the culture of incivility and bullying.
So what are a few of those safety myths?
10% of patients in healthcare are or will be harmed by the care we provide – in truth, we don't know the full extent of harm. A percentage needs both a numerator (everything that goes wrong) and a denominator (all the care delivered, right and wrong), and we capture neither completely; an organisation that records 500 incidents cannot say whether that represents 1% or 20% of its care, because so much goes unrecorded. We cannot calculate a rate of harm because, currently, the data are impossible to collect.
Incident reporting systems will accurately represent the safety of an organisation – incident reports are merely indicators of what is happening in an organisation. They are brief triggers for further inquiry and can never be anything more than that. They capture what is easy to report, and the reports are submitted mainly by one profession (nurses). They are also used to capture organisational and operational issues and, sadly, can be used to air a grievance against another individual or as a threat. Today's incident reporting systems drive a never-ending pursuit of rising numbers: we count failures and aim to reduce them while simultaneously aiming to increase the number of reports of failure. This has created a huge industry; we are now drowning in data, and one could ask at what point we stop.
The pressure on organisations to increase reporting means they capture reports simply for the sake of the numbers. These systems are being 'gamed', with people learning how to make their organisation's reporting look healthy. Everyone aims to sit in the middle of any league table, because it is the place of least interest to those who scrutinise reporting behaviours. What we have created is a culture of mediocrity.
Incident reports can be used to prioritise solutions and activity – the problem with incident reports – beyond the fact that they will never show the true number and types of events or harms, or the lack of learning – is that they come to dictate how activity is prioritised. Organisations set up groups and hold meetings to tackle the reported events or harms, assign people roles focused on reducing the numbers reported, and create an industry of short-term projects. Very rarely are these isolated harms looked at in combination or studied for the cross-cutting factors that thread through all of them.
Root cause analysis is the right method to find out exactly what happened – as humans we like neat answers. There is a belief that when something goes wrong there must be 'a' cause, and we assume we will find it. Everyone likes a cause, even better a single cause. In fact, very few things can be deemed a preventable root cause, and very few can be addressed so that they will never happen again. This is because, as we shall see later in this blog series, systems are complex and adapt all of the time; outcomes emerge from a complex network of contributory interactions and decisions, not from a single causal factor or two. Incidents are disordered, and there is no such thing as 'find, analyse and fix'. As Steve Shorrock says (see his excellent blog, referenced below), it is also important to note that, given the adaptive nature of complex systems, the system after an incident is not the same as the system before it: many things will have changed, not only as a result of the outcome but simply through the passing of time.
There are theoretical and practical consequences of root cause analysis for day-to-day operations, strategic management and planning, safety culture, and organisational safety (Hollnagel 2013). Hollnagel's view is that simple linear accident models were appropriate for the work environments of the 1920s, when they were first conceived, but not for today's. Even Professor James Reason, the 'inventor' of the Swiss Cheese Model of accident causation and investigation, argues that such models have their limitations.
There are many times when I have investigated an incident and the cause has been elusive. Even with something as significant as administering the wrong drug to the wrong patient, it was hard to truly understand why it happened. While the outcome is clear, the same is not true of the actions that led to it. In healthcare in particular, those actions are likely to arise from transient conditions – things present at one time only, in that particular place. The same set of conditions may never occur again, which means we cannot fix them the way we would a linear process or a technical fault. We cannot control every condition; the best we can do is minimise the error-producing conditions.
Suzette
Shorrock, S. (2016) The Varieties of Human Work. https://humanisticsystems.com/2016/12/05/the-varieties-of-human-work/
Hollnagel, E. (2013) Is safety a subject for science? Safety Science. Elsevier. http://dx.doi.org/10.1016/j.ssci.2013.07.025
Hollnagel, E., Braithwaite, J. and Wears, R.L. (2013) Resilient Health Care. Ashgate Publishing, Surrey, England.