Re(think) incident reporting

Snippets from an article published on 27 July 2015 in BMJ Qual Saf doi:10.1136/bmjqs-2015-004405

Patient safety reporting: a qualitative study of thoughts and perceptions of experts 15 years after ‘To Err is Human’.

Over the last 15 years, literally millions of incident reports have been submitted around the world. In the UK alone, 1.5 million reports are submitted each year to the National Reporting and Learning System. However, as yet, there is little evidence that incident reporting is associated with delivering safer healthcare. During this time, clinicians, hospital administrators and accrediting bodies have gleaned useful insights about the challenges of incident reporting. For example, incident reports detect only a small percentage of relevant patient safety issues, as most reports concern mundane events. Additionally, very few doctors submit adverse event reports. Instead, incident reporting is largely undertaken by nurses, and incident reporting systems largely fall under their governance within healthcare organisations. Doctors’ reluctance to report incidents appears to be multifactorial, and includes time constraints, medicolegal fears, a lack of clarity about what to report and a paucity of feedback on previously submitted reports. Other limitations include the general failure to disseminate information derived from incident reports and difficulties in using the data once disseminated. A significant issue raised was the lack of visible action as a consequence of reporting an adverse event, which was considered one of the major weaknesses of incident reports. This failure to see visible action was believed to have a strong negative influence on the commitment of front-line workers, particularly doctors, to reporting adverse events.

Potential areas of improvement

Overall governance of reporting:  Many reporting systems are under-resourced and paralysed by overwhelming amounts of data, with little visible action. Healthcare organisations need to bring clarity to the roles and responsibilities for incident analysis and ownership of the related actions, to allow incident reporting systems to carry out their roles adequately.

Improve medical engagement:  Although there is no shortage of reports being submitted, reports are rarely submitted by doctors, limiting the types of events reported and thereby what the organisation can learn.

Reduce incident report volume:  The large volume of reports leads to wide variation in report quality across the large number of reporters, thereby reducing their usefulness, the timeliness of analysis and the ability to prioritise analyses.

Institute-focused reporting:  At the hospital level, reporting can be proactively targeted towards specific types of adverse events, new systems or services.

Establish processes for low-frequency event reporting:  There need to be opportunities to encourage the reporting of low-frequency, but serious or potentially serious, adverse events. This allows one-off events that have not been identified a priori to be thoroughly investigated through a root cause analysis.

Robust triage processing:  One proposed and potentially helpful way to address the volume of reports would be to categorise them, separating mundane events that are deviations from normal reliable processes from those events that rely on the individual, team or organisation to respond to a new or unexpected demand, reflecting the organisation’s resilience. Those events deemed high risk could then be singled out for a meaningful root cause analysis by skilled staff, to ensure appropriate lessons are learned.

Learning from reports:  The next generation of reporting systems must be visibly linked to effective action that can be easily monitored and evaluated to provide the evidence that doctors need in order to engage.

Given the lack of evidence linking incident reporting to improved patient safety we need to re(think) incident reporting.

15 websites for Patient Safety

I promised you a list of the top sites that I always go to for evidence, information, latest news, research etc. These are my favourites:

  1. NHS Scotland, Quality Improvement Hub (Qi hub) via http://www.qihub.scot.nhs.uk [this one has a whole section on quality and efficiency supporting the business case for safety]
  2. Agency for Healthcare Research and Quality (AHRQ) in the US via http://www.ahrq.gov
  3. Implementation Science – an open-access online journal via http://www.implementationscience.com/
  4. Risky Business site for all past talks via http://www.risky-business.com/
  5. BMJ Open – open access online journal via http://bmjopen.bmj.com/
  6. BMJ blogs via http://blogs.bmj.com/quality/
  7. Health Quality and Safety Commission New Zealand via http://www.hqsc.govt.nz/ which also has the Open campaign via http://www.open.hqsc.govt.nz/
  8. The Health Foundation and their patient safety resource centre via http://patientsafety.health.org.uk/
  9. The National Patient Safety Foundation in the US via http://www.npsf.org/
  10. IHI – the institute for healthcare improvement via http://www.ihi.org/
  11. NPSA for archived stuff via http://www.npsa.nhs.uk/
  12. WHO patient safety via http://www.who.int/patientsafety/en/
  13. Leading Health Systems Network via http://www.leadinghealthsystemsnetwork.org/
  14. WISH – World Innovation Summit for Health via http://www.wish-qatar.org/
  15. The International Society for Quality in Healthcare (ISQua) via http://www.isqua.org/

Re(think) Patient Safety Series – Blog 3

This is the third in a series titled Re(think) Patient Safety.

The series started by looking at learning, moved on to thinking about the right culture for safety and now concludes with a discussion around implementation and implementing the right solutions for safer care.

Many of us have tried to realise ideas and introduce new methods, but after a while we have been forced to admit that things didn’t turn out as we had originally intended and planned.  We all know how complex and challenging implementation is, and how it requires ongoing maintenance.  There is a great new series in Quality and Safety in Healthcare entitled ‘The problem with…’, which Kaveh Shojania and Ken Catchpole will be overseeing.  The series will cover problematic improvement strategies, as well as problems that seem never to go away.  They have started by looking at checklists, and will go on to review incident reporting, mortality reviews, falls, hand hygiene and others.  In the editorial they discuss the difficulties of implementation.  They describe the challenges we face which can lead to inadequate implementation:

  • The unanticipated effort or expertise for implementation – problems requiring substantial effort to solve
  • Failure to deliver the expected results
  • Solutions that are wrong for the particular problem
  • Focusing on a problem out of proportion to its importance
  • Failing to appreciate the complexity of a problem.
  • Strategies that sound like solutions but still haven’t been worked out, so are really unsolved problems e.g. change the culture, focus on leadership or teamwork

So, yes implementation is complex and challenging and it requires ongoing maintenance. It is a process not an event; it requires both skilled expertise and concerted effort and investment.

Implementation is a specified set of activities designed to put into practice an activity or program or solution.  It includes adoption, dissemination, diffusion (Everett Rogers, author of the classic book Diffusion of Innovations, defined diffusion as ‘the process in which an innovation is communicated through certain channels over time among the members of a social system’), spread and sustainability.

The implementation process starts with someone having an idea about a new way of doing things that can be used to meet a need or solve a problem. The idea may originate in the organisation where the need arose, or outside the organisation, where an external view has identified a need and a solution.  The idea is presented and a decision is taken to move forward to dissemination.  This should then lead to adoption, which includes the key steps of planning, preparation and application of the solution or activities needed to achieve the sought-after change.  Once the new method has been adopted from both a practical and organisational point of view, it is then adjusted to the local context.  Finally, the method is considered institutionalised, embedded or sustained if it is taken for granted or is ‘the way we do things’ regardless of reorganisations, personnel turnover and political changes.

Implementation research has identified the common factors or components that have significant bearing on the success or failure of implementation.  These include:

  • The recognition that there is an explicit need for change and a solution, and that the proposed method is the right one in the context
  • There are visible benefits
  • The solution is in line with the norms, values and working methods of the individuals, teams or organisation implementing it
  • The solution is easy to use and it can be tested on a small scale
  • It can be adapted to the needs of the recipient and the context
  • Finally, it gives rise to knowledge that can be generalised to other contexts

Implementation research has also identified the tools or methods that usually help implementation:

  • Oral or written information (guidance, how-to guides, standards, alerts etc.) is normally offered when a new method or change is to be introduced; however, offering only information, education or practical training is usually not enough in isolation
  • The research has found that an optimum combination of several measures is required, e.g. education and practical training and feedback. This also needs to be supported by continuous high-quality support, time and resources and to involve the users (people subject to the change) at an early stage of the design process
  • If a new method does not lead to the anticipated effects, it should be possible to find out whether it was the solution that was not right for the particular problem, the method of dissemination itself that did not work, or whether it was down to unsuccessful implementation factors

Many of you are working on individual aspects of patient safety and harm; falls, pressure ulcers, sepsis, acute kidney injury, VTE ….. the list goes on. We have made changes in these areas that mean that patients are safer, but even impressive results appear to be difficult to sustain over time.  There are many individuals who are passionate about their particular area.  For the main disease- or harm-specific topics in patient safety there are nominated individuals – I will describe them as change agents.  A change agent is an individual, either inside or outside the organisation, who ‘lobbies’ people to adopt and implement new methods.  Change agents can be found in different arenas and occupy different positions; common to all of them, however, is a strong belief in the change they are ‘selling’.  They:

  • Help people become aware of a problem or a need
  • Establish a trusting relationship with the audience
  • Help people see that the problem cannot be dealt with using existing methods
  • Motivate the audience to choose the new solution to deal with the problem or need
  • Provide practical support to implement the solution
  • Support the integration of the new method into daily activities in the long term

However, this risks creating competition, so that people don’t know which ‘interest’ or area of harm deserves more or less effort, time and resource.  These competing interests create competing prioritisation and confusion.  In fact most incidents are caused by the same common set of causal or contributory factors: for example, deficiencies in teamwork and team culture, communication, problems like patient identification and observations, the way we share information with each other and with patients, and the way we design the system and pathway.  The same cross-cutting causal factors, occurring time and time again, can explain the majority of the reasons why harm happens:

  • No matter what the problem under investigation is – such as wrong-site surgery, hand-hygiene compliance, or patient falls
  • No matter where the problem is – such as hospitals, practices, care homes or patients’ homes
  • No matter who is affected

What should we do differently? What should we re(think) in patient safety?

So we could focus on implementation through the lens of these cross-cutting themes rather than on the disease- or harm-specific issues.  We could focus implementation on those causal factors that come up time and time again.

When you design your improvement plan I urge that you back it up with an equally important implementation plan that supports change over time (recognising that it can take up to 10 years) and do a few things well rather than try to juggle multiple programmes.  I also urge you to think about whether you should stop doing something. If it isnt working you may need to be brave and say so.

Mary Dixon-Woods also argues that we should consider what can be standardised right across the system, across the NHS and in fact across the globe.  So I would argue that sustained change would be more likely if:

  • We support concerted efforts to reduce harm related to specific topics and address the five causal factors with targeted interventions
  • We also address systemic issues at a national and even international level

This unique combination could lead to major improvements across all aspects of safety in healthcare.

At Sign up to Safety, our proposition is that over the last fifteen years in the patient safety field there has been a lot of guidance (alerts, solutions, interventions, standards) for those that work in healthcare to help them ‘make care safer’. However there are four key problem areas:

  • The lack of knowledge as to how this guidance can best be implemented in different areas and settings
  • The length of time it takes from knowing about the guidance to putting the changes into practice
  • The lack of sustained change from these efforts
  • The concerted effort on solutions which may not be the right ones for the particular problems they are purporting to address

Implementation researchers suggest that the time taken from research and guidance to the practical use of new safer methods can be ten to seventeen years; speeding up this process feels like an important task.  The Sign up to Safety campaign team will be exploring the subject of implementation with our community, topic experts and expert practitioners in the delivery of healthcare. Our hope is that by focusing on this area we will increase the NHS’s understanding and help acquire new knowledge about how we can facilitate and enhance learning and sharing across the system.

Sign up to Safety is helping the NHS galvanise the field of patient safety – to move forward over the next fifteen years with a unified view of the future of patient safety to create a world where patients and those who care for them are free from avoidable harm.  Stopping, or doing things in a very different way, always sounds like such a big deal.  It sounds like something that should be approached with awe and done once or twice in a lifetime.

I would argue otherwise: stopping or re(thinking) patient safety could be a very wise thing to do.

Re(think) Patient Safety Series – Blog 2

  1. Learning from incident reports and investigations
  2. Embedding the just culture for safety
  3. Delivering the right solutions and closing the implementation gap between theory and practice

Blog 2 continues the Re(think) Patient Safety series, which covers the three issues above.  This blog will cover the issue of embedding the just culture for safety.

“The single greatest impediment to error prevention is that we punish people for making mistakes” – Lucian Leape, 1997 (in a speech to the US Congress)

There is a myth in patient safety – that we at some point advocated the ‘blame free’ approach to safety.  In fact we only ever advocated the fair blame approach – what some call the just culture for safety.  The thing is, this is not well understood. There is an urgent and vital need for everyone to understand the just culture: one where people are supported when things go wrong, but where there is also accountability for risky and reckless behaviour. The lessons from the just culture community should inform our approach to regulation and human resources.  Human fallibility and human error are inevitable, and we need to care for the people who are involved.  In fact there are three important things we should all agree on:

  • The best people can make the worst mistakes
  • Systems will never be perfect
  • Human beings will never be perfect

Agreeing on this will mean that we will be able to design our work processes based on a preoccupation with failure instead of counting the things we failed to stop; one which supports a resilient system where errors are kept small and where systems are designed to allow for human adaptability and flexibility.

If we have the right culture in healthcare we should always:

  • expect to operate on the wrong leg
  • expect to give the wrong dose of a drug
  • expect to give a patient the drug they are allergic to

….if we do that then we will embed a preoccupation with failure, which just might mean that we are safer because of it.  But if we keep perpetuating the fear of blame we will never be able to do this.  Let’s start by stopping the avoidable-versus-unavoidable debate. Up and down the country there are all-day meetings where individuals (mainly nurses) stop what they are doing and nervously trickle into a room, clutching mounds of paper with sweaty hands, to argue their case for whether the incident they were involved in was avoidable or unavoidable. Judge and jury sit and debate, with people desperate to be put in the unavoidable camp.

– The quote from Robert Montgomery comes to mind: ‘Are you really listening, or are you just waiting for your turn to talk?’

This at best is distracting and at worst perpetuates the blame and fear culture. People feel that they can’t deliver the bad news or problems without retribution. This approach is too simplistic and stifles people who need to share the problems in order to identify the potential learning and solutions.  Let’s be kinder to each other. Let’s be kinder to ourselves. Yes people are imperfect but they need help to get through life and work.  You have to fail a lot before you get it right but we don’t give people permission to admit this.  Judgement silences things, turns learning into a secret. Empathy enables that secret to be shared.  The just culture addresses all of these issues.  This means:

  • People who make an error (human error) are cared for and supported
  • People who don’t adhere to policies (risky behaviour) are asked first before being judged
  • People who intentionally put their patients or themselves at risk (reckless behaviour) are accountable for their actions

What can we do to re(think) the safety culture?

Review your current disciplinary policy.  Let’s ensure that when incidents are investigated there is an understanding of human factors and the just culture.  The key issue is to ensure that learning from events outweighs the deterrent effect of punishment, and that your staff feel able to speak out, raise concerns and report incidents.

Re(think) Patient Safety Series – Blog 1

Over the last fifteen years I do believe that we have learned a lot about patient safety, but I also think we should press the pause button and, before we press play and go forward for another fifteen years, think about what we have learned and what we should do differently in the future.  I am constantly reading about patient safety, talking to others about patient safety, searching Twitter and YouTube for talks and articles on patient safety – in search of hope. Hope that I will find the solution to our shared problem.  Our shared problem is that we know that care should and could be safer.  I would like to share the three areas of research I have been focusing on.

  1. Learning from incident reports and investigations
  2. Embedding the just culture for safety
  3. Closing the implementation gap between theory and practice

The first in this series covers learning; in particular learning from incident reports and investigations.

I was listening to a webinar only recently by the Director of the Centre for Quality Improvement and Patient Safety at the University of Toronto, Kaveh Shojania.  A third of the way through his presentation he said: “Incident reporting is the single biggest waste of time in the last 15 years and it is the most mistranslated intervention from aviation.”  Now this is not new for Kaveh (you can Google his name and find an interview with him and Bob Wachter), but he said it in such a way that I heard it differently. So I reflected.  The general view is that the more incident reports there are, the safer the organisation is – that high numbers of incidents reported equals a good safety culture. There has been a determined drive to increase incident reporting as a result.

However, incident reporting has become an end in itself.  As a risk manager I all too clearly remember being stuck at the top of a building with piles and piles of incident reports – the relentless flow of paper reports arriving on my desk and the desperate attempts to keep up with inputting them into the database. I remember trying to analyse them to see what they were telling me about the state of the organisation but all I kept doing was counting them and I never really felt I did any of them justice.  Incident reports were supposed to help us fix things. However, for every incident, small failures often go unnoticed and unreported.  So we focus on pie charts and trend data of what gets reported and the numbers continue to rise year on year with consistently the same types of incidents reported from consistently the same settings and people.  It’s relatively easy to increase the reporting rates if you really want to but after a while people notice that they keep reporting the same things and no one appears to be actually fixing anything.

Kaveh talked about the wrong types of events getting reported.  He said that incident reporting systems are used to capture problems better suited to other strategies.  He talked about falls as an example and said there is almost no point in reporting a fall: falls are a common enough problem at every hospital that you should stop doing incident reports for them.  You wouldn’t want to do a root cause analysis about a fall every time it happens.  And there isn’t a single great solution, so it’s not clear what an organisation is supposed to do after analysing one or more individual falls. If our aim is to quantify and capture numbers of incidents then fine, let’s report every fall; but if our aim is to learn from incidents then he may have a point.  Are we ready to move on from individual incident reports of high-frequency events?

If we were learning, then we would not see the same things happening time and time again:

  • operating on the wrong leg
  • administering the wrong dose of a drug
  • giving a patient the drug they are allergic to

I recently put the term ‘ten times the dose’ into Google and numerous reports came up. One in particular was by a nurse called Kimberley who, just like me, worked in PICU although her story was 20 years later. As a result of a ‘ten times the dose incident’ one of the children in her care died. I read about Kimberley’s interview for the root cause analysis – she said the following:

“I messed up, I’ve been giving calcium chloride for years, I was talking to someone while drawing it up, I miscalculated in my head, I will be more careful in the future”

You can see she felt she was personally and solely culpable for this incident.  She also gave us clues as to why the incident happened: distraction, automaticity, calculating in her head.  ‘I will be more careful in the future’ is not a recommendation for sustained change.  Kimberley was suspended from work and sadly she never got over this incident. A few months later she committed suicide as a result.  So why have we not fixed this?  Why is it still possible to make a ten times the dose error?

If we were learning then we would give the right treatment at the right time.  I was listening only the other day to Scott Morrish talk about his 3-year-old son Sam, who died as a result of sepsis. The investigation found that he died due to a delayed diagnosis and a delay in administering the prescribed antibiotics.  I sat there listening; I was expected to provide a response to his talk as part of a panel session that followed him and James Titcombe. Two dignified parents in the face of such personal trauma.  As I listened I reflected on Martin Bromiley, Jayne Zito, Chezelle Craig, Claire Bowen and many more, whose voices represent so many patients and loved ones who have died as a result of their treatment.  I reflected too on the many stories from nurses, doctors and others who had been affected by harmful events.  So many stories over the last fifteen years.  I was truly saddened and moved beyond words by this dignified man in front of me – but I was also hugely frustrated and angry.  It was unacceptable that Scott was there telling us his story.  It is unacceptable that these stories continue to be told.  It will remain unacceptable if we don’t do something differently.  The thing is, our current approach to investigations is often filled with flaws:

  • They are usually carried out weeks and months after the incident
  • People don’t remember a lot of what happened, they forget what they did or what others did
  • Outcome, hindsight and confirmation bias all play their part to skew the truth
  • We make it really hard for the patients and the families involved who simply want answers.  We need to make it easier for them to get the right answers as soon as possible

In particular the outcome bias is strong – we judge the same incident differently depending upon the outcome.  The outcome effect occurs when the same behaviour produce[s] more ethical condemnation when it happen[s] to produce bad rather than good outcome, even if the outcome is determined by chance.  If a nurse makes an error that causes no harm, we consider the nurse to be lucky.  If another nurse makes the same error resulting in injury to a patient, we consider the nurse to be blameworthy, and disciplinary action may follow – the more severe the outcome, the more blameworthy the person becomes.  This is a flawed system based upon the notion that we can totally control our outcomes.  Interestingly, outcome bias has also influenced our legal system.

–A drunk driver suffers far greater consequences for killing someone than for merely damaging property. The driver’s intent is the same, the outcome very different, yet society has shaped the legal system around the severity of the crime

–What is worrying here is that the reckless individual who does not injure someone sometimes receives less punitive sanction than the merely erring individual who caused injury

In the brief window (usually within 24 hours) between incident and investigation lies one of the very few opportunities to learn about what actually happened.  Instead we investigate a long way in time from the actual incident and come up with a variety of recommendations for:

  • A new protocol
  • Training
  • Reminders
  • Suspensions and restrictions

Remember also that the changes we put in place, however good or bad they are, erode over time – we are very good at focusing intensely on something for a short while, but we all take our eyes off the ball and revert to our original habits and behaviours unless we make fundamental design changes to the system which make it hard for people to slip back.

What should we do differently?  What needs a re(think)?

For example, there are many alternatives or backups to incident reporting that could help: safety huddles; changing reports to high-level data capture, so that you capture only the headline of the incident, followed by a triage process run by someone who knows what an incident that needs following up looks like – only that much smaller number would be interrogated for more data, and only a smaller group still would be investigated; case note review; or the use of trigger words. Even clever automation or data mining software would help, but we are not there yet.

Re(think) patient safety series

The National Patient Safety Foundation in the US has commissioned an expert panel to assess the state of the patient safety field and to produce a report that would set the stage for the next 15 years of work. Tejal Gandhi the CEO of the NPSF announced on their website that they hoped that this would ‘galvanize the field to move forward over the next fifteen years with a unified view of the future of patient safety to create a world where patients and those who care for them are free from avoidable harm’.  If we were to assess the last fifteen years together in order to think about the next fifteen years what do you think we should be doing?  What would our unified view of the future of patient safety be?  I would start with three key areas which this re(think) series will cover:

  1. Creating the learning culture and in particular learning from incident reports and investigations
  2. Embedding the just culture for safety
  3. Identifying the right solutions and closing the implementation gap between theory and practice