All incentives should point in the same direction. In the end, culture will trump rules, standards and control strategies every single time, and achieving a vastly safer health service will depend far more on major cultural change than on a new regulatory regime – Don Berwick 2013
Healthcare is not one single system but a set of systems ranging from the ultra-safe to ultra-adaptive (Vincent and Amalberti 2016). Neither is it one culture – but that is for another chapter.
The system design helps or hinders the behaviour and performance of the individuals who work within it: the clinicians, the managers, the administrators and so on (Dekker and Leveson 2014). There are so many variables within healthcare that it is often a surprise that so many things go right.
- Variation in training, individual experience, fatigue, boredom, depression and burnout
- The diversity of patients – variation in age, severity of illness and co-morbidities
- Working hours – types of shift patterns, length of working hours
- Complicated technologies – equipment, tasks and procedures
- Drugs – calculations, drug names that look and sound alike, prescriptions, computerized prescribing and alert systems
- Design – poor packaging, poor labelling, design of equipment, design of environments (and age of buildings)
- Length of stay – prolonged or multiple transfers
- Poor communication, handover, transitions and information
Safety experts increasingly agree with the work of Rasmussen, who back in 1990 argued that a safer system is one that is able to adapt – able to help the people who work within it adapt to their environment or to changes in their environment. The goal is not to constrain human behaviour into a set of tasks but to design a system in which a human can function safely and effectively while still being able to use their judgment and ingenuity.
This chapter introduces concepts such as resilience, resilience engineering, and Safety I and Safety II. These aim to create a system that continually revises its approach to work in an effort to prevent or minimise failure, remains constantly aware of the potential for failure, and helps people make decisions and direct (the limited) resources to minimise the risk of harm – knowing the system is compromised because it includes sometimes faulty equipment, imperfect processes and fallible human beings. Hollnagel and colleagues describe three key elements of resilience:
- foresight or the ability to predict something bad happening
- coping or the ability to prevent something bad becoming worse
- recovery or the ability to recover from something bad once it has happened
More recently the field of patient safety has been moving from the (potentially) simplistic approach of capturing incidents and looking at when we get it wrong towards learning from when we get it right. This is innovative in that it looks proactively rather than reactively and does not assume that erratic people degrade an otherwise safe system. The work of Adrian and Emma Plunkett – Learning from Excellence – is an outstanding example of putting these theories into practice.
James Reason talks about humans as heroes – resilience experts align with this and describe a “new view of human error” which sees the humans in a system as a primary source of resilience in creating safety. Eric Hollnagel describes this in detail in the book that sets out the concepts of Safety I and Safety II (Hollnagel 2014). He defines the current state, Safety I, as one in which as few things as possible go wrong, and describes how we seek to improve safety by measuring and then trying to reduce the number of things (systems and humans) that go wrong.
This approach is stretched to or beyond breaking point – and the world has changed.
In Safety I the basic assumption is that the system is safe and people make it unsafe. It generally goes like this:
- Something goes wrong – the default is to look at what the policy says should have happened
- Most investigations usually start with an assumption that the ‘operator’ must have failed
- Regulators, legal and clinical negligence systems and human resources use this approach, and anyone who deviated from the ‘normal’ policy is found to be at fault
- We are fixated on clear and simple explanations of what ‘should be done’ – we try to prescribe tasks and actions in every detail
- When there is a difference between the policy and practice this is used to explain why things went wrong
- Everyone actually doing the work knows it is only possible to work by continually adjusting what they do in a given situation
In Safety II we are much more proactive and interested in how things go right, how people are continuously trying to anticipate the potential for harm. Instead of humans being seen as a liability, in Safety II the systems are considered unsafe and the people create safety; humans are seen as a necessary component of system flexibility and resilience.
Performance adjustments and variability are at the core of healthcare. This is a positive thing but it is interpreted negatively as… deviations, violations and non-compliance.
However, the reason that things go right most of the time is not because people behave as they are told to, but that people can adjust their work so that it matches the conditions. So…
Value the humans in the system and treasure adaptability
- People adapt and adjust to actual demand and change their performance accordingly
- People interpret policies and procedures and apply them to match conditions
- People can detect when something goes wrong, or when it is about to go wrong, and intervene to prevent or correct it
In Safety II the pursuit is one of building a system where everything goes right. It is also the ability to succeed under expected and unexpected conditions alike, so that the number of intended actions and acceptable outcomes is as high as possible.
This leads the patient safety reviewer to find out how and why things go right, and to ask how we can see what goes right. Questions are the answer:
- Do you ever adjust the activity to the situation?
- How do you determine which way to proceed?
- What do you do if something unexpected happens? e.g. an interruption, a new urgent task, a change of conditions?
Hollnagel goes on to say that adopting a Safety II perspective does not mean that everything must be done differently or that current methods must be replaced wholesale. His recommendation is that we look at what is being done in a different way. It is still necessary to investigate things that go wrong and it is still necessary to consider the risks but that when doing an investigation adopt a different mind-set.
I also talk about human factors, standardisation and the way in which we have for far too long focused on problems in isolation, one harm at a time. Across the UK there are people who concentrate on falls, pressure ulcers, sepsis, acute kidney injury, or VTE… the list goes on. These individuals are passionate about their particular area; they can be lone individuals or groups, either inside or outside the organisation, and they ‘lobby’ people for attention. What they tend to do is focus others around them on reducing or eliminating specific harms through a project-by-project approach. People are confused; they don’t know which ‘interest’ or area of harm deserves more or less effort, time and resource. While there is a need for these passionate people focusing on their particular cause, we also need to ensure that everything is joined up – stopping these causes from becoming competing priorities and maximising their successes through collaboration.
Patient safety requires a concentration on the factors that thread throughout all of these individual areas of harm; a common set of factors. For example: communication, patient identification, patient observations, the sharing of information with each other and with patients, the deficiencies in teamwork and team culture, and the way we design the system and care pathway. These cross-cutting factors appear time and time again and have been found to explain the majority of the reasons why harm happens.
The concepts described in this chapter move us away from the predominant approach of the last two decades, which has led us to treat safety as a series of isolated incidents, to overlook the ways in which we get it right most of the time (and so fail to replicate them), and to leave disparate patient safety projects unjoined.
To the patient it is never just one incident or a nice neat line of activity and there is often hope amongst the harm. We need to address patient safety in the same way.
Rasmussen J (1990) Human error and the problem of causality in analysis of accidents. In: Broadbent DE, Reason J, Baddeley A (eds) Human Factors in Hazardous Situations. Oxford: Clarendon Press, pp 1–12
Hollnagel E, Woods DD, Leveson NG (2006) Resilience Engineering. Abingdon: Ashgate
Hollnagel E (2014) Safety-I and Safety-II: The Past and Future of Safety Management. Farnham: Ashgate
Reason J (2015) A Life in Error. Farnham: Ashgate
Vincent C, Amalberti R (2016) Safer Healthcare: Strategies for the Real World. Springer Open