
Saturday 13 April 2024

 Long waits in A&E kill patients and NHSE denials are not an appropriate response



This is a long blog, sorry. But I wanted to document in more detail the key arguments about the excess deaths and the NHS response. 


The Royal College of Emergency Medicine has been campaigning strongly on its estimates of the mortality caused by long waits in A&E departments. RCEM recently updated their estimates using new data about how many long waits there were last year. The response from NHSE continued to be denial rather than action. 


Partly because the NHSE response continues to repeat what are, at best, extremely misleading ideas and, at worst, deliberately devious distractions from an important issue, I think it is worth a longer look at the topic to clarify where the numbers come from and whether the NHSE response is credible.


The background and history

The Royal College of Emergency Medicine attracted headlines in most media in early April 2024 (The Guardian, The Times, Sky News, The BBC, and even the Telegraph) with a new estimate of the number of excess deaths caused by long accident and emergency waits.


Their updated calculations suggested more than 250 extra deaths are occurring every week because of long A&E waits. They had released similar analysis in 2022 and their president, Adrian Boyle, had explained and defended their calculations in front of the House of Commons Health Committee in January 2023, where Chris Hopson, the NHSE propaganda Strategy Director, responded alongside a couple of other senior directors. Adrian Boyle did a good job but the NHSE response basically consisted of denial and diversion.


The key responses from NHSE in 2024 are largely the same. It is worth recording some of them as this will become important when I dig into the detail later.


Chris Hopson made 3 key points to the Health Committee:

“The first issue is the pressure on the urgent and emergency care pathway. We know that the NHS has been under an unprecedented degree of pressure on that pathway. We know that has led to significantly longer waits than we have seen before and we know that those longer waits are associated with poorer outcomes.”


“The second issue is that at the same time—Chair, you quoted these figures in your earlier questioning—we are seeing higher levels of excess deaths over the winter months. Those higher levels of excess deaths are not unusual. That will obviously reflect flu, cold weather snaps and covid….It is right, as I said when I did my interview, that experts at the ONS, supported by the chief medical office and working with the chief medical officer, continue to analyse the reasons for that higher level of excess death.”


“The third issue is that obviously, when you combine the two, which is the link between the pressures on the urgent and emergency care pathway and the higher levels of excess mortality, the widely quoted 300 to 500 a week figure that is, as you have heard, based on a study in the Emergency Medicine Journal suggests a link to delays in admitting patients from emergency departments and all-cause 30-day mortality. The key phrase is “suggests a link”. … that figure of 300 to 500 cannot be definitive and does not give a full and certain picture. That is why both I and our chief medical officer, Sir Steve Powis, said we did not recognise that figure, while … recognising that longer waits are associated with poorer outcomes.”


The press release in response to the 2024 RCEM estimates basically repeated shorter versions of these arguments. It added a claim that performance had turned the corner and was now improving. That claim relied on a small improvement in 4hr performance in March 2024, when NHSE put a great deal of pressure on the system to meet a 76% interim improvement target. That target was still missed by 2 percentage points, and the total number of 12hr waits in the year to March 2024 was down just 24k from the 1,733k in the previous year. You can judge for yourself whether that counts as a notable amount of improvement.


Those claims are problematic 

Even without examining the details of the RCEM calculation, it is easy to see why the NHSE response is deeply disingenuous. A charitable interpretation is that the senior directors at NHSE who have commented didn’t understand the RCEM analysis as presented at the Health Committee. But they have had more than a year to do their homework since the data used by the RCEM was published. And they have failed to change their responses in the 15 months since the committee hearing, which undermines that explanation.


Hopson’s first point was that “pressure” (which I think means volume of patients) is causing poor performance. That pressure, he claims, leads to longer waits so it isn’t the NHS’s fault. But this point is directly contradicted by the analysis in the new UEC Strategy, also released in January 2023, which clearly states that “the number of attendances is not the thing primarily driving performance”. (That admission, incidentally, represented a major reversal of NHSE strategy for improving emergency care, which had for a decade sought to divert patients away from A&E to lower volumes in the hope this would improve performance, despite multiple previous analyses saying it would not.) Hopson’s claim that “We know that has led to significantly longer waits than we have seen before” is directly contradicted by his own strategy. To be fair, the new strategy was published a few days after the committee hearing, so perhaps he hadn’t read it yet. Right?


The second claim is basically that excess deaths in winter are normal and expected. Indeed the ONS weekly excess deaths statistics show that more people die in winter. The NHSE argument amounts to “nothing to see here, move along”. But this is either a deep misunderstanding of both the RCEM claim and the ONS excess deaths publication or a deliberate attempt to distract from the implications of the RCEM results. The unfortunate use of the same name, “excess deaths”, might contribute to some confusion, but the details of how the RCEM reached their conclusion show that the only link is the name (I will explain more later when I show how the original estimate was done).


The third claim exploits the statistical caution of the RCEM and the original authors, who knew they could not prove causality in a non-randomised study. But imagine trying to get a study where A&E arrivals were randomly allocated to different lengths of wait past any ethics committee. The real question is how robust the estimates from a good observational study are, given that a randomised trial proving causality is impossible. I will come back to this when I explain how the original estimates were done. The key point is that NHSE have tried to avoid engaging with the detail of the estimates by dismissing the results as something they don’t recognise.


What was the basis for the original analysis by the RCEM?

The RCEM estimates are derived from a major study in the Emergency Medicine Journal (EMJ) published in January 2022. To understand the basis of the RCEM calculations I first need to explain how the original EMJ paper was done (luckily, I’m a co-author).


The motivation for doing the EMJ study was twofold. One reason was to understand the extent to which A&E performance was deteriorating and what the consequences were. The other was to provide some more concrete evidence about why having a 4hr target was so important. At the time the study was started, performance was declining and many were questioning whether the standard was merely an arbitrary management target or was based on measurable clinical criteria (the original standard was driven by clinical experts but a decade later many had forgotten this).


I had realised that, now the patient-level A&E data was reliable and the ONS kept a linkable dataset of deaths within 30 days of hospital discharge, it was possible to measure the mortality rate of groups of patients with different characteristics. In particular it would be possible to measure whether patients with long waits had higher mortality than those with shorter waits. Some studies in other countries had suggested that long waits did increase mortality for all patient types. But those other studies used less comprehensive data than the NHS holds. We could do better.


The statisticians involved in the EMJ team realised that, in order to get unchallengeable results, it would be important to rule out some of the possible confounders. In particular, it feels intuitively obvious that sicker or older patients need longer treatment times in A&E (which would imply that the cause of higher mortality would be their clinical state, not the length of time they waited). On the other hand, almost no NHS patients waited longer than 4hr in 2010, which suggests that time spent in A&E is not itself driven by clinical need. The volume of patients had not changed dramatically since 2010, and there were far more doctors, but speed/performance had declined a lot. Nevertheless the statisticians wanted to have enough data to rule out confounders like morbidity and age. So the team chose to look only at admitted patients, where the inpatient HES data gives far richer evidence on patient morbidity than the A&E HES data.


So that is what the study did. Two years (2016 to 2018) of patient-level data from all English A&E admissions (about 5m admissions in total) was linked to the ONS 30-day mortality data, enabling direct measurement of the mortality rates of patients with different characteristics, including how long they waited to be admitted. It is important to note that the study is not estimating mortality; it is estimating which factors are related to observed mortality.
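For anyone who wants to see the shape of the method, here is a minimal sketch in code of the kind of analysis involved. To be clear, the file names, column names and the simple logistic adjustment below are my own illustrative assumptions, not the study’s actual code; the point is just the logic of linking admitted patients to 30-day deaths and comparing mortality across waiting-time bands after adjusting for patient characteristics.

```python
# Minimal sketch of an EMJ-style analysis. File names, column names and the
# simple logistic adjustment are illustrative assumptions, not the study's code.
import pandas as pd
import statsmodels.formula.api as smf

# Patient-level admissions linked (by a patient identifier) to ONS 30-day deaths.
admissions = pd.read_csv("admitted_patients.csv")   # hypothetical extract
deaths = pd.read_csv("ons_30day_deaths.csv")        # hypothetical extract
df = admissions.merge(deaths[["patient_id", "died_30d"]], on="patient_id", how="left")
df["died_30d"] = df["died_30d"].fillna(0)

# Band the time spent in A&E before admission.
df["wait_band"] = pd.cut(df["wait_hours"],
                         bins=[0, 4, 6, 8, 12, float("inf")],
                         labels=["under4h", "4to6h", "6to8h", "8to12h", "over12h"])

# Crude 30-day mortality by waiting-time band (the raw comparison)...
print(df.groupby("wait_band")["died_30d"].mean())

# ...and a model adjusting for age and a comorbidity score, so the wait-band
# effect is not just a proxy for sicker or older patients.
model = smf.logit("died_30d ~ C(wait_band) + age + comorbidity_score", data=df).fit()
print(model.summary())
```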


To cut out a lot of detail, the study showed that the waiting time before admission made a significant difference to the mortality rate even after adjusting for other possible confounders.


Crudely, the overall mortality rate for admitted patients is about 8%. But for patients who wait between 8 and 12hr, that rises to nearly 10%. What the study estimates is how many extra deaths there are for patients with longer waits compared to the mortality for those who wait less than 4hr. In fact the mortality rises linearly for every extra hour waited beyond 4hr. For every 191 waits between 4 and 6hr there is one extra death; for every 72 waits of 8-12hr there is an extra death. There were not enough >12hr waits to get a good estimate of the mortality there but, given the strong trend of higher mortality with longer waits, it is reasonable to conclude that it is higher for waits longer than 12hr. Obviously there are some error bars worth adding, but given the base data includes 5m individual patient records, there is a lot less uncertainty than you might think.
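For readers who like to check the arithmetic, the number needed to harm (NNH) is simply the reciprocal of the absolute increase in mortality. A rough illustration using rounded numbers (approximations consistent with the figures above, not the paper’s exact point estimates):

```python
# How the NNH figures relate to the mortality rates quoted above.
# Numbers are rounded/illustrative, not the paper's exact estimates.
baseline = 0.08        # ~8% 30-day mortality for admitted patients waiting <4hr
group_8_12h = 0.094    # rising towards ~10% for waits of 8-12hr (illustrative)

absolute_risk_increase = group_8_12h - baseline
nnh = 1 / absolute_risk_increase
print(f"risk increase ~{absolute_risk_increase:.1%}, NNH ~{nnh:.0f}")
# risk increase ~1.4%, NNH ~71: roughly one extra death for every 72 patients
# who wait 8-12hr, matching the figure quoted in the text.
```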


This chart from the paper summarises the relationship (SMR is the standardised mortality ratio):



The basic conclusion is that long waits before admission are associated with higher death rates even after considering patient morbidity. Since it isn’t an RCT, careful statisticians won’t claim they can prove causality, but this is a big study done carefully that comes as close to estimating causality as it is possible to get. It might technically be an association, but there are big flashing red lights hinting that the effect is real, significant and causal.


One other thing worth noting is that the study was not funded. NHSE didn’t pay, nor did any other body or think tank. Everyone involved gave their time freely because they recognised the importance of getting hard evidence. 


All the subsequent estimates by the RCEM and others are based on the mortality rates observed in the EMJ study updated to reflect the number of long waits in later years.


How does the RCEM turn numbers of long waits into estimates of excess deaths?

Most of the estimates of current excess deaths apply a simplified grouping of waiting times and mortality from the EMJ paper to current counts of waiting times in A&Es.


For convenience the paper calculated NNH (number needed to harm) for 3 different groups of waiting times: 4-6 hr (191); 6-8hr (82); and 8-12hr (72). What the NNH means is that, for example, there is one extra death for every 191 patients waiting between 4 and 6 hours.


These NNH values can be used to estimate excess deaths directly from the known numbers of patients waiting in those time bands. Assuming, of course, that the mortality rates have stayed similar to the rates in the period under study.
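As a sketch, the calculation really is just division: waits in each band divided by the NNH for that band. The weekly wait counts below are made-up placeholders purely to show the mechanics; only the NNH values come from the paper.

```python
# Convert counts of long waits into an excess-deaths estimate using the EMJ
# NNH values. The weekly wait counts are made-up placeholders.
nnh = {"4-6h": 191, "6-8h": 82, "8-12h": 72}
weekly_waits = {"4-6h": 20_000, "6-8h": 10_000, "8-12h": 7_000}   # hypothetical

excess_deaths = sum(weekly_waits[band] / nnh[band] for band in nnh)
print(f"Estimated excess deaths per week: {excess_deaths:.0f}")
# With these placeholder volumes: 20000/191 + 10000/82 + 7000/72, about 324/week.
```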


But the A&E statistics that are normally published don’t count the number of waits in those bands. Total waits >4hr is routinely published (that is the definition of the A&E target). But, partly due to a media furore in 2016 triggered by the increasing number of anecdotes about 12hr waits, NHS Digital did start publishing annual totals of 12hr waits.


The following chart shows those totals (the red line is the annual count of 12hr waits) in the context of total major A&E attendance:





For context, in case this was not clear, in 2022/23 about 11% of all arrivals waited more than 12hr to leave the A&E. The target is for fewer than 5% to wait more than 4hr.


NHSE resisted publishing more details of 12hr waits for a long time. They didn’t relent until last year when monthly 12hr totals were included in the monthly performance numbers.


Knowing the annual total 12hr waits gives at least some basis for starting to estimate excess deaths from long waits. And the initial RCEM estimates were based on applying the EMJ mortality rates to the annual 12hr totals. 


That’s what the RCEM used. They assumed, as most experts assumed, that most 12hr waits were for admitted patients, so the EMJ NNH for 8-12hr waits could be applied to them. Since the mortality estimated by the EMJ work rises with every extra hour waited, this should give a conservative estimate for the group waiting >12hr. Their 2023 estimate was that between 300 and 500 extra deaths occurred every week from long waits.


Independent actuaries and statisticians have cast their expert eyes over these numbers and found them plausible. This Full Fact analysis from January 2023 has a good summary of their opinions of the original claim.


In 2024 the RCEM FOI’d the system for better data. It turned out that the assumption that most 12hr waits were for admitted patients was not quite right: about 30% of patients are discharged after their 12hr wait. The EMJ study didn’t estimate the mortality for discharged patients, so the RCEM excluded them to get a more reliable excess death estimate for the group waiting >12hr for admission. This still left a shocking, if slightly lower, estimate of an average of 250 deaths per week caused by long waits.
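Here is a rough reconstruction of that arithmetic. The annual 12hr total below is illustrative (of the same order as the roughly 1.7 million annual waits mentioned earlier) and the RCEM’s exact inputs will differ somewhat, but it shows how figures of the order of 300-500 and 250+ per week arise.

```python
# Rough reconstruction of the RCEM-style arithmetic. The annual 12hr-wait total
# and the admitted share are illustrative; the RCEM's exact inputs differ.
annual_12hr_waits = 1_700_000   # roughly the annual total discussed above
nnh_8_12h = 72                  # EMJ NNH for 8-12hr waits, used conservatively
admitted_share = 2 / 3          # FOI data: about two thirds of 12hr waits are admitted

# Original-style estimate: treat nearly all 12hr waits as admitted patients.
naive_weekly = annual_12hr_waits / nnh_8_12h / 52
# Revised estimate: count only the admitted patients the EMJ mortality applies to.
revised_weekly = annual_12hr_waits * admitted_share / nnh_8_12h / 52

print(f"naive: ~{naive_weekly:.0f}/week, admitted-only: ~{revised_weekly:.0f}/week")
# With these inputs: roughly 450/week and 300/week, the same order of magnitude
# as the published 300-500 and 250+ per week figures.
```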


But note the conservatism of this estimate. It applies the mortality rate for the 8-12hr wait group to the >12hr wait group even though there is good reason to think mortality is higher for those longer waits. And it ignores any mortality for discharged patients, not because there is unlikely to be any but because the EMJ paper didn’t estimate it.


A similar FOI done about the same time as the RCEM one shed some further light on this and allows a different estimate. This is from an FOI by The Independent:


We can see in this the total attendances broken down by waiting time and also classified by whether the patient was admitted or not. The RCEM were right: only about ⅔ of 12hr waits are admitted. But we also have the number of waits longer than 4hr and under 12hr (where slightly less than 30% are admitted), which allows an additional excess deaths estimate based on the below-12hr waits. Even if we take the lowest mortality band from the EMJ study (NNH of 191 for waits between 4 and 6hr), this suggests an extra 150 deaths per week.
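Again, the arithmetic is simple. The annual total of 4-12hr waits below is a placeholder of roughly the right order of magnitude (I have not reproduced the FOI table here); only the NNH and the roughly 30% admitted share come from the discussion above.

```python
# Sketch of the additional estimate for waits between 4 and 12 hours.
# The annual total is a made-up placeholder; only the NNH and the ~30%
# admitted share come from the text.
annual_4_to_12hr_waits = 5_000_000   # hypothetical annual total of 4-12hr waits
admitted_share = 0.30                # slightly under 30% of these are admitted
nnh_4_6h = 191                       # lowest (most conservative) EMJ mortality band

weekly_extra_deaths = annual_4_to_12hr_waits * admitted_share / nnh_4_6h / 52
print(f"~{weekly_extra_deaths:.0f} extra deaths per week")
# With these placeholder inputs: about 150 per week, in line with the figure above.
```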


An additional comment is worth making. It is downright astonishing that so many patients wait 12hr only to be discharged. This alone should be a major indicator of the level of dysfunction in our A&E departments.




The implications of the numbers and the NHSE response

If hundreds of deaths a week are occurring because patients are waiting too long to leave A&E, that is surely one of the most significant problems facing the NHS.


But the leadership in NHSE “don’t recognise the numbers”. And claim that it is the job of the ONS to calculate excess deaths. This is what Full Fact reported (see the above link; highlights are mine):


“When asked on the BBC if he accepted that A&E delays have caused deaths, Professor Stephen Powis, National Medical Director of NHSE, said “it’s not unusual to see high levels of excess deaths in the winter”.

 

When pushed to give an NHSE estimate of deaths due to delays in A&E he said it is “very difficult to say'” but that it was “not for us at [NHSE] to produce those figures, [it’s] for the ONS and others to look into”.

 

However an ONS spokesperson told us: “We are not able to produce any analysis on deaths that are due to A&E delays. Our statistics are based on death registrations, so we analyse deaths (excess deaths in this case) based on information collected on the cause of death from the death registration.”


NHSE’s Chief Strategy Officer Chris Hopson also previously told the Today programme “a full and detailed look at the evidence…is now under way”, but we don’t have any further details of that work, or even know who is doing it.


In the press release responding to the 2024 revision of the RCEM claims, those responses were broadly repeated.


The statement by the ONS undermines the diversionary claim by NHSE that “excess deaths” are what the ONS do. The ONS excess deaths analysis is unrelated to the EMJ analysis: the specific calculation of the relationship between waiting times and mortality requires NHS data the ONS doesn’t routinely analyse. Yet the same claim that this was the ONS’s job was repeated 15 months after the ONS denied it. Chris Hopson’s claim that “a full and detailed look at the evidence…is now under way” would be a welcome development, but no evidence has emerged in 15 months that this is happening.


This is particularly frustrating as NHSE’s own analysts are the only people who have access to all the data needed to repeat the EMJ analysis. If a competent analyst were asked to take a quick look at the data, they could produce an approximate check on the credibility of the EMJ analysis within a week. Better than that, they could extend the analysis to include discharged patients, which the EMJ analysis ignored. And they could use all the data since 2018 and update the estimates to test whether the problem was getting better or worse over time.


If the EMJ analysis lacks credibility or is downright wrong, NHSE could show why quickly by repeating the analysis themselves. There are several possible reasons why they have not done this. One is that the leadership doesn't understand just how easy it would be for their own analysts to do it. That is disturbingly plausible, but they could call any of the EMJ authors and ask; as far as I know, none of them have been contacted by NHSE. Another is that they are showing wilful blindness to the severity of the crisis in A&E. The worst explanation is that they have looked at the evidence, things are even worse than the EMJ estimated, and they really don’t want to admit that.


The important issue is that, while some other organisations could repeat the EMJ analysis (though more slowly and with older data), NHSE are the only organisation who could do a thorough job on up-to-date data. Despite a promise to “look into” the evidence made in January 2023, there is no evidence this has been done.


The importance of the results (statistics are a lot less compelling than single patient anecdotes)

The death of one man is a tragedy. The death of millions is a statistic. (falsely attributed to Stalin, actually a paraphrase of earlier work by Kurt Tucholsky).


The influence of media stories about bad things happening in the NHS is dominated by personal anecdotes. They work well in headlines and writing because they provide that personal link that tugs at the strings of empathy. The handful of deaths caused by nurse Lucy Letby are given outsized impact because the media can report the personal stories from the families and staff. Even the scandal of Mid Staffordshire (potential deaths caused by poor practice estimated anywhere between hardly any and a thousand) is far more salient in the public mind because of the personal stories from some of the victims and their families.


But this distorts the perception of where the big problems are. There are no such stories about the hundreds of excess deaths every week in A&E. At most we get stories about how awful it is to be stuck on a trolley for 12 hours. But we can’t identify the individuals who died early because of long waits: the weekly totals are merely statistics, and it is impossible to separate the 8% of admissions who would have died even with a sub-4hr wait from the extra 2% who died because of a long wait.


The huge scale of the problem is a statistic and the media don’t treat it as a tragedy.


So, a lack of compelling personal anecdotes leaves public discussion of NHS problems deeply unbalanced. The NHSE leadership can’t use this as an excuse. They have a duty to understand which problems are biggest and the measure for that is the statistics not the anecdotes or the number of bad news stories in the media. If they don’t recognise the scale of the problem, they won’t devote the right amount of focussed effort to fix it.


Even conservative estimates of the excess deaths associated with long waits put them at 20k per year. That’s way more than the total number of deaths estimated from the NHS contaminated blood scandal. It is the same scale as the total estimated deaths from heart attacks caused by Merck’s Vioxx (rofecoxib) painkiller, which forced Merck to withdraw the widely used drug.


But NHSE continues to deny the statistics. And, while the media in general have discussed it, the issue has not received anything like the emphasis given to stories built on personal anecdotes.


What should NHSE do?

To me there are a handful of key actions that are necessary:

  1. Immediately stop trying to deflect from the issue with weak excuses or spurious arguments.

  2. Repeat the EMJ study using the more recent data that NHSE have unique access to. Do it for recent data and for the 7 or so years of old data that would also cover the initial EMJ study. Also assess whether discharged patients see elevated mortality.

  3. Be open with the results so independent experts can either refute the EMJ claims or refine the claims. 

  4. If the EMJ results hold up, immediately rethink the priorities for where action is most urgently needed to improve the NHS and adopt a much tighter focus until the biggest problem is fixed.


According to an old Mark Twain pun, Denial isn’t just a river in Egypt. The NHS can’t afford an NHSE that is taking a whole riverboat cruise there.


Friday 9 February 2024

The NHS needs to redesign the metrics it uses for A&E performance


Getting patients through A&E in 4hr is a good goal which the NHS once achieved for the best part of a decade. But the way this performance is calculated is a mess that needs serious revision if the system is ever going to achieve it again.


NHS performance data dump day happened in early February and we got the numbers for performance up to January. They were mostly bad but we have become so attuned to bad performance they didn’t raise many eyebrows. And the combined Panglossian might of the DH and NHSE press offices will undoubtedly manage to squeeze some positive messages from the detail.


We should ignore anything the press offices say. Not least because they will all be the first against the wall when the revolution comes.


And, apparently, NHSE are trying to get ministerial sign-off for a new interim target for A&E performance to drive improvement. But the new target is to get 77% of patients out within 4hr, just 1 percentage point more than the current, shockingly unambitious, target of 76%.


They should be far more ambitious. And ministers should insist that the targets are redesigned as the current ones are as useful as the Fukushima nuclear power plant after the tsunami.


Here are some back of the envelope observations from the January numbers that show why major changes are needed.


The 4hr target and its problems

There isn’t anything fundamentally wrong with the 4hr target, despite what some anti-target thinkers claim. When it was first introduced many claimed it was purely an arbitrary management target and would distort clinical decisions. But this has been studied and it wasn’t true. Setting and enforcing the standard led to huge improvement.


Getting through an A&E quickly is good for the patient. And the original intent was to set a simple standard that would eliminate particularly dangerous long waits. The intuition behind this was good and we now have a great deal of evidence that long waits kill. In the biggest UK study mortality starts to be measurably larger with waits over 5hr and keeps rising with longer waits (for admitted patients). Other studies elsewhere see the same effect for discharged patients.


And, since >98% of patients did leave A&E in <4hr from 2005 to 2010 with far fewer A&E staff than the current levels, we have good evidence the target is achievable. 


But the problem with the current way the target is calculated arises because of two factors: current achievement is very poor and there are now different types of “A&E” that don’t work the same way and have very different performance.


Type 3 A&E units take about 30% of the total volume and have grown a lot in the last 15 years (some are called walk in centres (WICs), others minor injury units (MIUs) and urgent care centres (UCCs)). They don’t open 24hr a day and can’t handle major injuries or some specialist services. But, most importantly, they don’t usually have problems meeting the 4hr target and have very little impact on major A&Es unless they are co-located.


But the metric for A&E performance includes their performance even when the units have no meaningful relationship to the major A&E their performance is attributed to. When everyone’s performance is good, this doesn’t matter as the headline metric will clearly signal where there is a performance problem. But now that major A&Es often have performance below 50%, including UCC numbers creates a huge opportunity for gaming and dilutes the signal identifying where the problems are.


Worse, they are not distributed evenly. Some hospitals have no attributable type 3 units; others have large numbers of them. This creates both inconsistency and an opportunity to game the headline number. In some cases hospitals have sought dodgy legal routes to “claim” control of type 3 units in order to hide how poor their persistently bad major A&E is.


To see how prevalent this is, look at this chart based on January 2024 numbers.


The Royal Cornwall’s major A&E had a performance of just 41% but their headline performance nearly met the interim national standard once their (unrelated) type 3 performance was included.
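To see how the dilution works, here is a toy version of the weighted-average arithmetic. The attendance split and the type 3 performance are hypothetical (I don’t have the trust’s actual volumes to hand); only the 41% figure and the 76% interim standard come from the text.

```python
# Illustration of how type 3 activity dilutes a poor major (type 1) A&E figure.
# Attendance volumes and type 3 performance are hypothetical.
type1_performance = 0.41   # major A&E: share of patients out within 4hr
type3_performance = 0.98   # type 3 units routinely hit the 4hr standard (assumed)
type1_attendances = 8_000  # hypothetical monthly volumes
type3_attendances = 12_000

headline = (type1_performance * type1_attendances +
            type3_performance * type3_attendances) / (type1_attendances + type3_attendances)
print(f"headline 4hr performance: {headline:.1%}")
# With this split the headline comes out around 75%, close to the 76% interim
# standard, even though the major A&E itself is at 41%.
```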


All the trusts in red are getting at least a 5 percentage point boost to their headline performance by including type 3 activity. If their major A&Es were performing in the 90%s this would barely matter, but only 3 trusts with big headline boosts are doing better than 65% on their major A&E performance. At those levels of performance, including type 3 activity gives a huge and unjustified boost to their headline number. For trusts in blue, the headline metric is a good approximation of their major A&E performance.


Another way of viewing this data is shown below in a chart that ranks how many points trusts’ headline performance is boosted by including type 3 activity:


It is hard to take a metric seriously when the headline numbers see so much adjustment from factors unrelated to the core point of having a target.


The solution is fairly simple. If we are trying to drive improvement, the reported metric should be for individual units and type 3 units should be kept separate from major type 1 units. (There is a slight complication: if a type 3 unit is co-located with a major A&E, they should probably be grouped together, and this would affect some of the numbers above, but this isn’t that common.)


The performance problems are essentially all in type 1 units, so a metric that focuses only on their performance should be used to identify and drive improvement. (Caveat: some clarification of definitions may be needed, as some of the above numbers may include co-located type 3 units that should really be counted as part of the major A&E.)


The problem of 12hr waits

There is another problem with using the 4hr metric to drive improvement. In its original formulation meeting the 4hr target virtually eliminated the possibility of very long waits. That is no longer true. If the standard were set at 12hr rather than 4hr we would still be a long way from meeting it. Not only is the current NHS failing to get 95% of patients through A&E in 4hr, it isn’t even getting 90% through in 12hr. So driving improvement purely by looking at 4hr can miss the need to eliminate very long waits.


We have some evidence that 12hr waits continue to rise significantly while marginal improvements occur in the 4hr standard. This might suggest that some trusts are putting effort into the 4hr standard while neglecting patients who have missed it, leaving them with very long waits. That is very much pursuing the target while missing the point.


While the 12hr performance is broadly related to the 4hr performance, the detail suggests that some trusts are much worse at curtailing very long waits. This chart shows the overall relationship with an extra twist: it also analyses the proportion of >4hr waits that also wait >12hr (nationally about one third of 4hr breaches end up waiting >12hr but this ratio varies a lot).



So, instead of trying to set an interim target for 4hr performance it might be far more effective to start with a focus on those very long waits. Set and enforce a target for 12hr waits as the interim metric and return to 4hr only when 12hr waits have been eliminated. 


This will cause a problem for NHSE, who resisted publishing honest 12hr waits for nearly a decade (they were only forced to do so in February 2023 because the minister insisted on it). But, given the scale of excess mortality from those long waits (which is probably in excess of 2k patients per month), this should be a major priority.


The problem of the 12hr wait after DTA metric

NHSE might object to using 12hr waits from arrival on the grounds that it already has a 12hr metric which has a long publication history. This is the longstanding 12hr wait after a decision to admit (commonly called the “trolley wait” target).


But this metric is unreliable and gameable, and this has long been known. The intent of the metric is to focus attention on long waits for admitted patients caused by delays finding a bed. The problem is that the decision to admit (DTA) is entirely gameable. Hospitals can delay the DTA if beds are scarce, minimising the number of reported delays. Many patients have already waited 8-12hr by the time a DTA is made, so the reported numbers seriously misrepresent long waits. The 12hr from arrival metric is, in contrast, not gameable. Historically we don’t have monthly data to compare the two metrics, but annual numbers are published, and the real 12hr waits have been more than 100 times higher than the 12hr DTA count. As overall performance has collapsed, that ratio has fallen and is now between 3 and 4.


The analysis below shows the relationship at trust level between the 12hr after DTA metric and the 12hr from arrival metric. Note the variation across trusts and the fact that some trusts with a large number of 12hr from arrival waits have almost no 12hr from DTA waits.



The DTA metric is unreliable and should be replaced with the far more reliable 12hr from arrival metric.


Conclusions

There is a huge problem in how NHSE have tried to improve A&E performance, and the metrics they have used are only a part of it. NHSE strategy was entirely focussed on the wrong causes of poor performance for a decade. And, even though the current UEC strategy (published in January 2023) admitted that mistake, NHSE still seem bereft of focus on the underlying operational problems causing poor performance. Their process improvement methods seem rickety, with little grip and few incentives to drive improvement.


But the whole process of driving improvement–even if it were effective–would be undermined by metrics that fail to correctly identify where performance is poor. Better metrics won’t fix the performance, but at least they could stop actively undermining the process.


[Added after original posting] PS: One additional problem I forgot to mention in the first draft is that the current data is reported at trust, not site, level. Many trusts run multiple type 1 A&Es but there is no public data on site-level performance despite many trusts having sites with very different performance. It would be good both for the public and for the system’s internal ability to understand performance differences if all reporting was changed to be site, not trust, specific. The argument for not doing this is that trusts are the legally responsible body for performance. I’d say, screw the legal niceties: we need the better, more specific data to get a grip on performance and to be honest with the public.