In safety today there is a view that error is somehow preventable: that when people make mistakes, all we need to do is tell them to stop making mistakes, and possibly sanction them if they do. It is an obvious statement, I know, but not everything we do will go right.
Imagine that you are in the midst of an intensive care unit, surrounded by pumps, wires and machines constantly flickering with numbers and lights, and you need to administer one drug into the patient’s vein and one into the patient’s nasogastric tube. By mistake you put the venous drug into the nasogastric tube and the nasogastric drug into the vein.
Why did you do this? Maybe you were distracted. Maybe you were thinking of the next thing you were going to do. Maybe your brain said, ‘I need to put one of these in the vein and one in the nasogastric tube, and I will put the first one I have in my hand into the vein’, even though it was the one to soothe the stomach. Maybe the equipment allowed you to do so: it wasn’t easy to see which syringe was which, and both fluids were the same amount and the same colour.
When we realise what has happened, the people who are ‘judging’ are horrified. How could that happen? Was the person not paying proper attention? Did they not care about what they were doing? This person is clearly incompetent or lazy. We have to stop this person from doing it again and, at the very least, punish them.
As I have said in previous blog posts, because of the consequences of mistakes and errors in healthcare, we do need to try to figure out what happened in order to design ways to prevent it from happening again: solutions that help people get things right, such as better labelling or packaging of the drugs, incompatible connections between the syringes and the different tubing, or having someone check what you do. At the same time, we still need to accept that error is part of life and work, part of being a human being who works in healthcare, and that we have made many mistakes before and will make many more in the future. The key is therefore to study the work that we do in order to understand it better, and to recognise the early warning signs that might be present in our system.
The term ‘human error’ makes a particular judgement. It clearly sets out that the human is the cause of the problem and responsible for the outcome. In the main, people use the term with good intent, to help others understand that human error is normal, i.e. we all make mistakes. But it implies that any failure, causal or contributory, is the fault of the human.
In theory, the term ‘human error’ relates to how human performance of a specific function might fail to reach its objectives, rather than to whether the human failed; in practice, the term misleads people into focusing on the error of the human. It implies that humans can also be fixed in some way; that the error is somehow controllable, or a choice. It points to the individual rather than to the system in which they work. If we simplify the human into the cause, then the solution is to stop the human from making errors, either by stopping them from continuing or by restricting them in some way. However, we know that incorrect human actions at the frontline are symptoms of a deeper set of problems within the system or the workplace.
Human error also stigmatises actions that could have been the right actions in slightly different circumstances. There is a fine line between right and wrong actions, and it is often only drawn once there is an end result or a known outcome.
Human error is too often used to describe carelessness, laziness or incompetence and is highly subject to outcome bias.
What if we don’t use the term human error at all?
Preferred terms are ‘error’ on its own, performance variability, erroneous conditions or system error. If all ‘human activity is variable in that it is adjusted to the conditions’, then the variability is a strength, indeed a necessity, rather than a liability. As many say, failure is the flip side of success. By acknowledging that ‘performance always varies and never is flawless, the need of a separate category for human error evaporates’ (Hollnagel 2014).
The view that we can prevent errors is perpetuated by terms such as zero harm, zero events or never events. The problem with seeking zero harm or never events is that they are impossible. There is a belief that if we count all the failures, find all their causes and treat them, then accidents and incidents are preventable; this is termed the ‘zero harm principle’. Zero harm is an attractive goal but unlikely to be achieved in any foreseeable timescale, if ever. The rhetoric of safety describes a world in which no one is harmed in healthcare.
This view is also perpetuated by organisations such as the World Health Organisation, the Patient Safety Movement in the US and national organisations within the UK. The WHO 10-year strategy contains 35 specific strategies. Number one is to make zero avoidable harm to patients a state of mind and a rule of engagement in the planning and delivery of health care everywhere.
Even the WHO recognises that this is controversial, and that opinions across global health about the wisdom of setting this kind of central or overarching goal are mixed. They describe it as a compelling vision. However, setting an unreachable goal is demoralising and demotivating, and will not attract clinicians to its cause. The reason for using this goal, the WHO state, is that the narrative of the last 20 years hasn’t worked, so surely this is the direction that needs to be taken. The WHO claim that a reduction in the currently unacceptable levels of avoidable harm is entirely within reach. I would remind the reader that we don’t actually know the true level of avoidable harm; we don’t know whether it is more than 10%, around 10%, or less.
The actions they suggest in order to achieve zero harm are to recognise that safety is a priority, to make a public commitment towards zero avoidable harm, to establish national safety programmes, and to map the existing policy and strategy landscapes related to themes such as surgical safety, medication safety, blood safety and so on. This perpetuates the myth that safety is all about individual harms, and continues the focus on acute care rather than the continuum of care that patients experience. Additionally, they state that member states should, amongst other things, create national patient safety charters, participate in World Patient Safety Day, adapt WHO patient safety guidance, allocate adequate resources for patient safety implementation and create minimum safety standards.
Improvers like to use ‘stretch goals’, and in this respect they would probably say that aiming for zero harm is a stretch goal and that there is nothing wrong with having this dream or aspiration.
But we have to accept that a system can never be ‘safe’; it can only be as safe as possible. Healthcare is never about certainty; it is about the balance of probabilities and risk. It is filled with people who will make mistakes no matter how hard they try to be perfect, often working in systems that are not well designed to help them work safely, and in conditions that increase the chances of things going wrong. If we tell them to aim for zero harm, then every time things don’t go as planned they will feel that they have let everyone down and failed against an expectation of perfection. We must not perpetuate the myth of zero harm, because it assumes that all accidents or incidents have causes, that those causes can be identified, and that it therefore follows that they can be prevented and reduced to zero. This is an impossibility.