
Thursday, 16 July 2015

Reporting frequency isn’t the problem. Failing to derive insight from the data is.

NHS England plan to stop releasing A&E performance numbers weekly. This is a bad mistake. The real problem isn’t inconsistency in the way performance against different standards is reported but a persistent failure to derive insights from the data.

It looks like NHS England will stop publishing weekly performance data for A&E departments in England from next month. According to their website:

Following the recommendations from Sir Bruce Keogh’s review of waiting time standards, statistics on A&E attendances and emergency admissions from July 2015 onwards will be published monthly rather than weekly, with the last weekly publication being for week ending 28 June 2015.

The reasons, according to Bruce Keogh’s report, are:

Current arrangements for reporting performance are extremely uncoordinated. Standards report with different frequencies (weekly, monthly and quarterly) and on different days of the week. This makes no sense - it creates distraction and confusion. We receive feedback that this makes it difficult for people to have one transparent, coherent picture of performance at any one time.

My recommendation is therefore that we standardise reporting arrangements so that performance statistics for A&E, RTT, cancer, diagnostics, ambulances, 111 and delayed transfers of care are all published on one day each month.

While I’m all for having a “transparent, coherent picture of performance”, what is being proposed addresses the wrong part of the problem. The good way to use data is to derive insight into what the underlying problem actually is. That should help focus improvement programmes on interventions that are likely to work and away from interventions that feel good but are irrelevant to the real problem. Making A&E reporting consistent with RTT reporting isn’t going to help anyone gain insight into their problems.

How performance data is misused

To be fair to Keogh, there are many ways that performance targets can be misused or can have perverse consequences.

The old targets for hospital waiting times had some very bad side-effects. They penalised hospitals for treating long waiters, leaving many spending more management effort on minimising the number of long waiters treated than on speeding up the overall treatment process. This is because every treated patient who had already breached 18 weeks counted against the reported performance, so it was better to keep them waiting longer and “drip feed” them into treatment at a rate that wouldn’t breach the target performance level. What a perverse waste of management time.
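To make the arithmetic of that perverse incentive concrete, here is a toy calculation (entirely hypothetical numbers, not real trust data): under a measure that counts the share of patients treated this period who were still within 18 weeks, every long waiter treated drags the reported figure down, so drip-feeding the backlog reports better even though patients wait longer.

```python
# Toy illustration (hypothetical numbers) of the old waiting-time measure's
# perverse incentive: reported performance is the share of patients treated
# this period who had waited 18 weeks or less when treated.

def reported_performance(treated_within_18wk: int, treated_over_18wk: int) -> float:
    """Percentage of this period's treated patients who were within 18 weeks."""
    total = treated_within_18wk + treated_over_18wk
    return 100 * treated_within_18wk / total

# Suppose a trust treats 900 short-waiters a week and has a backlog of 400
# patients who have already breached 18 weeks.

# Option A: clear the backlog quickly (100 long waiters treated per week).
print(reported_performance(900, 100))  # 90.0 -- looks bad against the standard

# Option B: "drip feed" the backlog (20 long waiters treated per week).
print(reported_performance(900, 20))   # ~97.8 -- looks good, but the backlog
                                       # takes five times as long to clear
```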

And there is a tendency to overreact to noise. When you have a weekly target there is often immediate pressure to “do something” when you breach the standard. But doing something without a good understanding of the causes of the problem is worse than doing nothing (a famous Deming experiment to illustrate this is explained here). As the blogger squiretothegiants says in explaining Deming:

The point is to understand the system and the reasons for variation. Then (and only then) you can make meaningful changes instead of merely tampering.
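For anyone who wants to see the cost of tampering rather than take it on trust, a minimal simulation in the spirit of Deming’s funnel experiment (a sketch, not the original apparatus) makes the point: a process that is nothing but stable noise, “adjusted” after every deviation, ends up roughly twice as variable as one left alone.

```python
# Minimal sketch in the spirit of Deming's funnel experiment: a stable process
# that is pure common-cause noise, "managed" with and without tampering.
import random

random.seed(1)
N = 10_000
TARGET = 0.0

def run(tamper: bool) -> float:
    """Return the variance of outcomes, with or without tampering."""
    aim = TARGET
    outcomes = []
    for _ in range(N):
        result = aim + random.gauss(0, 1)  # common-cause variation only
        outcomes.append(result)
        if tamper:
            # Tampering: move the aim to compensate for the last deviation.
            aim -= result - TARGET
    mean = sum(outcomes) / N
    return sum((x - mean) ** 2 for x in outcomes) / N

print(f"left alone: variance ~ {run(False):.2f}")  # about 1
print(f"tampering:  variance ~ {run(True):.2f}")   # about 2 -- reacting to noise makes it worse
```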

The performance management processes of the NHS positively encourage tampering to the detriment of effective improvement. Nigel Edwards summarised this very effectively in a rant earlier this year in the HSJ.

However, just because performance measures are often misused doesn’t mean we should change the measures, especially if they can provide useful insights into the causes of poor performance.

What insights can be derived from weekly A&E data

There are insights in the weekly data. Those insights tend to contradict the majority of popular myths about why problems exist. It is widely assumed, for example, that A&Es are being swamped by a wave of demand. Or that they are taking up the slack from people who should be treated by GPs but no longer are since GPs abandoned out-of-hours provision. Or that too many of the wrong sort of patient with trivial injuries are turning up. And so on.

None of these popular ideas are compatible with the basic simple statistics of the weekly A&E performance reports.

For a start, there has been no sudden change in the number of people attending A&E. Here is a chart showing the weekly attendance and performance for each type of department.

[Chart: weekly attendance and performance for each type of A&E department]
There has been no sudden change in attendance (in fact, though the weekly data hasn’t been publicly released that far back, this has been true over the last two decades in major A&Es: steady growth of 1-2% per year overlaid on a noisy week-to-week pattern, with summers busier than winters). It is also worth noting that all the problems with performance are in major A&E (type 1) departments. The opening of many new walk-in centres and minor injury units has generated new demand but has had no notable impact on attendance or performance at the major departments.
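As a sketch of how that claim can be checked rather than asserted, the calculation is nothing more exotic than smoothing the weekly attendance series and looking at its growth. The file name and column names below are assumptions for illustration; the published NHS England spreadsheets need some reshaping before they look like this.

```python
# Sketch: is type 1 attendance surging, or just growing slowly under weekly noise?
# Assumes the weekly figures have been saved as 'ae_weekly.csv' with columns
# 'week_ending' and 'type1_attendances' (file and column names are assumptions).
import pandas as pd

df = pd.read_csv("ae_weekly.csv", parse_dates=["week_ending"])
df = df.sort_values("week_ending").set_index("week_ending")

# Smooth out week-to-week noise and seasonality with a 52-week rolling mean,
# then look at the year-on-year growth of that trend.
trend = df["type1_attendances"].rolling(52).mean()
yoy_growth_pct = trend.pct_change(52) * 100

print(yoy_growth_pct.describe())  # growth of the order of 1-2% a year, not a surge
```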

Here is a more detailed illustration of the recent trends from hospitals in the Manchester area.

[Chart: weekly attendance for Manchester-area trusts relative to their average 2011 week]
The lines here show the weekly attendance versus the average week in 2011. The numbers at the end of the lines show the latest quarter versus the average in 2011. It is worth noting that two out of eight trusts have seen volume falls.
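The indexing behind that chart is straightforward; a sketch is below, assuming a long-format table with 'trust', 'week_ending' and 'attendances' columns (the names are assumptions for illustration).

```python
# Sketch of the indexing behind the Manchester chart: weekly attendance for
# each trust expressed relative to that trust's average week in 2011.
# Assumes a long-format table with columns 'trust', 'week_ending' and
# 'attendances' (file and column names are assumptions).
import pandas as pd

df = pd.read_csv("ae_weekly_by_trust.csv", parse_dates=["week_ending"])

baseline = (df[df["week_ending"].dt.year == 2011]
            .groupby("trust")["attendances"].mean()
            .rename("avg_2011"))

df = df.join(baseline, on="trust")
df["index_vs_2011"] = df["attendances"] / df["avg_2011"]  # 1.0 = an average 2011 week

# Latest quarter versus 2011, as shown at the end of each line in the chart.
latest = df[df["week_ending"] >= df["week_ending"].max() - pd.Timedelta(weeks=13)]
print(latest.groupby("trust")["index_vs_2011"].mean().round(2))
```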

More significantly, if you compare the last chart to the weekly performance shown below, you will see that there is no notable relationship between the volume and performance trends (note that Salford Royal has the largest increase in volume but the best and most consistent performance). The numbers to the right are the latest quarter volumes versus 2011 as in the previous chart.

[Chart: weekly four-hour performance for Manchester-area trusts, with latest-quarter volume versus 2011 shown at the right]
The lack of any relationship between volume and performance is one of the clearest results emerging from the analysis of weekly data and one of the most ignored in policy. Probably the majority of the money spent in the last few years to avert the regular winter crisis has gone on trying to divert patients from A&E. But the weekly statistics show that it isn’t attendance that is the problem. There is, however, a significant relationship between performance and the number of admissions. The chart below shows both attendance and admissions against performance for all major A&Es in the weekly dataset. Each dot is a single week.

[Chart: attendance and admissions versus four-hour performance for all major A&Es; each dot is one week]
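A sketch of the comparison behind those two scatter plots: how strongly do weekly attendances and weekly admissions each relate to four-hour performance? The file and column names are assumptions, and a correlation coefficient is only a crude summary, but it is enough to show which relationship exists.

```python
# Sketch: compare how weekly attendances and weekly emergency admissions relate
# to four-hour performance. Assumes one row per week with columns 'attendances',
# 'admissions' and 'pct_within_4hrs' (file and column names are assumptions).
import pandas as pd

df = pd.read_csv("ae_type1_weekly.csv")

print("attendance vs performance:",
      round(df["attendances"].corr(df["pct_within_4hrs"]), 2))  # little or no relationship
print("admissions vs performance:",
      round(df["admissions"].corr(df["pct_within_4hrs"]), 2))   # clearly negative: more admissions, worse performance
```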
This relationship strongly suggests that the core problem in A&E performance is focussed on the group of patients who need a bed. Other evidence (eg from analysis of HES data, which allows us to see how long different types of patients wait for treatment, discharge and admission) strongly supports the idea that the core problem is related to admission. This might suggest that there has been a significant change in patient morbidity, though analysis of A&E HES data (eg this from the Nuffield Trust) hasn’t confirmed this idea. Unfortunately admission thresholds in trusts are malleable and not a good indicator of whether patients need to be admitted.

Whatever the reason for the increasing number of admissions, even the weekly data clearly points to problems admitting patients quickly being the core issue. Moreover, the problems are entirely inside the major A&E departments and not in other parts of the system like walk-in centres (indeed, a key message is that diverting patients away from major departments is irrelevant to system performance, negating the supposed benefit Keogh expects from a focus on whole-system metrics). Investing effort elsewhere is a waste of time and money. Unfortunately, despite this clear message from the weekly statistics, this has not been the focus for action.

So What?

The key point I want to make is that the weekly data is a rich source for testing and monitoring ideas for improving A&E performance. But the key messages have mostly been ignored. Far too much of the performance improvement effort has focussed on the noise in the weekly data (as Nigel Edwards pointed out) and far too little on the long-term patterns revealed by looking at all the data.

The idea that we should look at only monthly data to help people see “whole system” performance is a mistake, as it makes it harder to see the key patterns that clearly point to the problem being in a particular part of the system and not in the system as a whole. The problem isn’t that inconsistent ways of reporting performance are confusing; it is that the clear messages in the data have been ignored, leading to improvement initiatives driven by anecdote, not analysis. We have, as a result, invested a great deal of money in initiatives that were never going to work.

The few independent analysts (eg @GMDonald, whose interactive analysis of the weekly data is on Tableau Public here) who have sought to look for the key patterns in the data will no longer find those patterns so easy to reveal. And even if the leadership in NHS England found a new capability to seek insight from data, they too would now find that insight harder to come by.

The problem has never been that the weekly data is misleading; it is that it clearly leads but few have ever bothered to follow that path.
