In reality we do not know what the true quantitative level of safety is. We don't know for sure how many things are going wrong, and we definitely don't know how many things go right. I would argue we may never know. We cannot capture everything; that would be impossible. What we do capture is biased, in that it is often the easy-to-see and easy-to-report incidents.
Researchers have tried for decades to estimate the level of harm in healthcare, and in the UK there is one statistic that seems to have stuck around: the 10% stat. This arose from the work of Charles Vincent and his colleagues, who carried out a retrospective record review in acute care. For many reasons it is a flawed statistic, but if we run with it, and with the other estimates from across the globe, which range from 3% to 20%… if 10% of our patients are being harmed by the care provided to them, what is happening in the other 90%?
If we looked at the 90% we would find functioning systems: not perfect, but functioning nonetheless. If we looked we would find people adjusting and adapting depending upon the circumstances they face; we would find people working around processes that are not ideal, or being creative and innovative to solve a problem. If we looked we would find people simply getting through their day.
If we define safety as care going well, then surely this information resides in the 90%, and surely we would want to know more about it. As Erik Hollnagel would ask, what is happening when nothing bad is happening?
However, in our current safety world we focus on the 10% as the way of understanding whether a system is safe or not, which means we only actually understand the system when it fails. Failure is a small portion of what is going on. This perpetuates unequal learning and assumes that learning from failure is better than, and more important than, learning from our day-to-day work and even our successes.
If we instead try to understand both the 10% and the 90%, we can put failure into perspective and perhaps truly understand how safe our systems, processes and practices are. Hollnagel and colleagues have coined this approach Safety-II.
Safety-II is a different interpretation of what it means to be safe. In fact, Erik Hollnagel (2014) would say that the word 'safety' is not actually about safety at all, but about ensuring an organisation, department, unit or team functions as intended. Safety-II isn't a new initiative or product for people to implement; it is much more a different way of looking at safety, using a different mindset, moving beyond the traditional focus on failure to consider wider issues of systems and how they function.
There are a number of ways we can start to operationalise Safety-II. Here are five:
First, we can incorporate this thinking into our current 'Safety-I' work. Of course it is still vital that we understand failure, collect data such as incidents, and investigate these to see if we can find ways to prevent things from happening again. But instead of seeing the failure in isolation we should ask 'if it failed this time, what normally happens?' or 'if it normally goes OK, why did it fail this time?'. It is vital that we consider how the system, process or task normally goes in order to detect the differences and the factors that led to an unwanted outcome. This also helps us to decide whether to make changes or not. Recommendations to change things based purely on failure risk disrupting how something normally goes well, and could in fact make things worse.
Second, we could study what people do every day, their 'work-as-done': how things unfold as they should, as they are expected or planned… together with what happens when the unexpected or unplanned occurs. Safety-II is a way to understand the realities of our everyday work in a constructive and positive way. Safety-II thinking explicitly assumes that systems work because people are able to adapt and adjust what they do to match the conditions they face. The current Safety-I view does not take into account that human performance practically always goes right, and that it goes right because people adjust what they do to match the conditions of work.
Healthcare staff are able to detect and correct when something goes wrong, or is about to go wrong, and intervene before the situation seriously worsens. The result of all this is performance variability: not in the negative sense, where variability is seen as a deviation from some norm or standard, but in the positive sense that variability represents the adjustments that are the basis for safety. However, our current approach attempts to reduce errors and incidents by standardising and constraining variability, through training, guidelines, policies, procedures, rules and regulations, as well as supervision and standardised processes, forcing healthcare staff to stick to the rules. Interestingly, this can lead to unproductive or even unsafe care.
Third, we could become more proactive and preventative. We have always had a tool to help us do this, but it has received a lot of bad press, mostly because it is deemed dull and boring, and feels like a thing we have to do before we can move on to the interesting stuff. This is risk management: understanding and reducing risks, working with the hazards we face and minimising their effects.
Risk management feels like such a fundamental aspect of safety to me, and yet it has often been reduced to a quick half-hour conversation to make sure we complete the risk register or comply with an expectation that we have considered these things. For me it is a crucial first step before ordering a new piece of equipment, before designing a new system or even a healthcare facility. Risk management is an essential part of Safety-II.
Fourth, we can understand risk resilience better. Risk resilience is the capacity to prevent minor mishaps from getting worse, or a minor incident from becoming a serious one. The methods focus on how safety can be maintained, and on understanding what is acceptable and unacceptable risk. A resilient system is one that continually revises its approach to work in an effort to prevent or minimise failure. It is about being constantly aware of the potential for failure, and helping people make decisions in the knowledge that the system is already compromised, because it includes sometimes-faulty equipment, imperfect processes and fallible human beings. Hollnagel and his colleagues (2015) describe three key elements of resilience:
- foresight, or the ability to predict something bad happening
- coping, or the ability to prevent something bad becoming worse
- recovery, or the ability to recover from something bad once it has happened
Resilience offers a proactive and positive systems-based approach, allowing people to understand both what sustains and what erodes the ability to adapt to changing pressures. People learn how to stay 'safe', rather than focusing on error as an end in itself. The humans in the system are a primary source of resilience in creating safety. Conklin (2020) outlines four components of risk resilience:
- Fixate on where the next failure will happen. Don’t be surprised by failure. Constantly look for areas that are confusing, risky or under high pressure. We cannot predict the next incident, but we can predict environments where events and failures are more likely to happen.
- Constantly strive to reduce complexity. Ask: what would make the work easier to do?
- Understand what the processes are serving. Are we trying to improve the operational aspects and governance of the system, or the outcome of care? As time goes by, rules and policies drift towards maintaining compliance with governance rather than achieving a good outcome.
- Respond to low-level signals seriously. Go out there, fix the problem and respond to events purposefully. Don't go out to fix the individual, and don't enact immediate policy and rule changes. Slow down and learn. The only way that change can ever happen, the only way incidents are prevented, is through learning.
Fifth, we can focus much more on behaviours and relationships. Safety-II is as much about the way we interact with each other and the way we behave as it is about the technical aspects of what we do. It emphasises mutual respect and non-tolerance of disrespectful behaviours, and it enables people to feel safe to participate and speak up. You cannot have Safety-II without a restorative just culture, psychological safety and an understanding of team behaviours.
In conclusion, we can embed Safety-II thinking into all that we do.
The aim of Safety-II is to create high-reliability, resilient healthcare systems: understanding the interactions among humans and the other elements of a system, and attending to the design of equipment and environments, communication, the handling of emergencies, simulation and teamwork.
Using Safety-II we can help support human performance, effectiveness, system design and safety in healthcare. We can use it to study the whole system: the processes and activities that are disconnected or isolated, the interconnections and relationships, the silos and the things that link the silos together. As healthcare continues to change rapidly and become more complex, systems thinking is vital to help us manage, adapt and consider the choices before us. It is also vital as part of any investigation required following an incident.
For too long in safety we have focused on problems in isolation, one harm at a time. Across the UK there are projects and people focused on falls, pressure ulcers, sepsis, acute kidney injury or VTE; the list goes on. These individuals and teams are passionate about their particular area. They 'lobby' people for attention and bring people together to help reduce or eliminate specific harms through a project-by-project approach. What this approach can do is create competing priorities. It can confuse those at the frontline, who don't know which 'interest' or area of harm deserves more or less effort, time and resource.
Safety-II would steer us away from this silo approach towards a more holistic, systematic one: working on the factors or variables that are common to, or thread throughout, all of the individual areas of harm; a common set of causal or contributory factors. These cross-cutting factors occur time and time again. This approach helps us understand the system and design solutions that fit the different systems in place. It helps us design solutions that change the system, not the people.