
Friday, 24 January 2025

NHSE took the blue pill


In the movie The Matrix, where people live in a fake virtual world, Morpheus offers Neo a choice. He says, “You take the blue pill... the story ends, you wake up in your bed and believe whatever you want to believe.” But taking the red pill means you wake up in the horrifying real world and have to face the consequences.


As far as I can tell NHSE took the blue pill.


They seem committed to ignoring the horrible real world of NHS problems and, as a result, seem incapable of fixing the problems.


For example, look at the draft plan for emergency care.


NHSE’s leaked draft plan demonstrates only one thing: they don’t understand how to fix emergency care. They are still stuck in the Matrix.


There are not enough Anglo-Saxon profanities to express how much contempt I have for their draft plan.


According to the HSJ it proposed the following actions:


  1. Improving vaccination rates and targeted preventative winter virus care.

  2. Reducing 111 calls put through to 999 or directed to ED

  3. Improve Hear & Treat, See & Treat, and Reduce Avoidable Conveyances

  4. Reducing ambulance handover delays

  5. Rapid triage at the front door to navigate patients quickly to the right care and avoid admission wherever possible

  6. Getting into a hospital bed more quickly for those who need one

  7. Improving access to specialist out-of-hospital provision

  8. Shorter Length of Stay

  9. Reduce discharge delays

  10. Standardising and scaling the six core components of neighbourhood health

This is pure blue-pill thinking.


It is so wrong on so many levels that I cannot hurl enough abuse at it.


To see why, let’s start with two things necessary for good, effective strategies: a good diagnosis of the biggest cause of the problem (you can only do this if you take the red pill), and a strong focus on the actions that will tackle that problem. I’ve pointed this out a lot. Nobody in NHSE, it seems, is listening.


The symptom of the problem is that far too many patients are waiting for excessively long times to get through A&E departments. There were 1.7m waits longer than 12hr last year when there should have been fewer than 0.8m waits longer than 4hr in major A&Es. It isn’t even clear that the NHSE leadership know that the problem is in major A&Es (if this speech by Steve Powis, the Medical Director, is any guide–for critical analysis of what he said see this Bluesky thread).


Worse, the leadership have fought against any admission of the most important consequences of those long waits, a large number of excess deaths (the EMJ analysis of this was published in 2022 but the leadership denied the credibility of the results; the ONS update of that analysis published last week suggests the problem is much worse but the presentation of their results omitted relevant data as if to avoid easy comparison with the EMJ estimates). NHSE overdosed on blue pills here.


It isn’t as if the dominant cause of long waits and poor A&E performance has not been analysed before. The team that originally delivered 98% of waits in under 4hr in the early 2000s said the primary problem was flow through beds. When the post-Lansley performance declined rapidly, several thorough analyses said exactly the same thing in 2015 (eg this). As part of the new strategy for fixing emergency care in January 2023 NHSE repeated these analyses and reached the same result. The primary cause is poor flow through beds. Attendance is irrelevant. (It might be worth noting that NHSE had to be strong-armed into doing or publishing that analysis by No. 10. It wasn’t just that they didn’t volunteer to do a root cause analysis before trying to develop a solution, they resisted doing the analysis. They wanted more blue pills.)


So, in principle, NHSE know these two truths about the causes of the problem:

  • Attendance doesn’t matter and isn’t the cause

  • The problem is flow through beds


These are the biggies. Debate often raises other factors as issues, like staffing. But we have analysis of most of them and they are not big contributors to the problem (staffing levels, like attendance, have no relationship to performance across a very wide range of different staffing levels in different departments).


So how do the 10 points in the draft plan stack up against those two, critical, facts? At least for those who didn’t take too many blue pills.


Five of the points are about reducing attendance. Some are perfectly reasonable actions that would achieve good things (who wants lower vaccination rates?). None are remotely relevant to the problem of improving A&E. None would contribute a female gnat’s testicles of better performance in A&E.


The other points might superficially look like they are dealing with the problem: Reduce ambulance handover delays; speed triage; lower LOS; speed admission to beds; reduce discharge delays. But all are either restatements of the problem or demands for improvements in the metrics that measure the problem. None are actions that might tackle the problem.


Apparently the strategy is to ignore the problem. And that is the entire strategy. The blue pills have done their work.


Fuck me this is bad.


Even if the draft is merely an outline and there is some document with scores of pages of concrete ideas behind each bullet point, this is a truly bad place to start. And the track record of those longer “strategy” documents does not suggest that longer is better. 


Here are the first steps I would take to generate improvement. Take the entire team responsible for writing or commissioning this draft and let them take blue pills so they can drink fake wine in a fake restaurant serving excellent perfectly cooked steaks. Maybe they can write off the cost of the fake reality as part of the ambitious plan to do more AI.


And find another team willing to take the red pills and solve the actual problems that continue to exist in the real world.


Sunday, 19 January 2025

Making sense of the new ONS estimates on A&E waiting times and mortality



Long waits in A&E kill patients. A new analysis of mortality and A&E waits by the ONS–despite issues in the analysis and presentation of the results–makes this look like an even bigger problem than previous analyses.


In January 2025 the Office for National Statistics (ONS) released a new analysis of the relationship between A&E waiting times and mortality.


This is an important study because understanding when NHS performance is killing patients unnecessarily is a major indicator of where the system’s biggest and most important problems are.


But the results will be more contested and confusing than they needed to be because the ONS have presented them badly and have omitted some key data, making the importance of the results harder to judge and harder to compare with previous analysis.


This note is an attempt to explain the significance of the ONS results while also suggesting some of the improvements that could be made to make the results more useful.


The background to this analysis

The ONS are not the first to attempt to estimate the excess mortality caused by long waits. A previous analysis (of which I was a co-author) was published in 2022 in the Emergency Medicine Journal and also used NHS patient-level data to derive reliable estimates of the relationship between long waits and mortality.


This work was partly inspired by a previous Canadian study which also concluded that longer waits substantially increase mortality but with less reliable data on length of wait (the UK studies use data that contains the wait for individual patients).


The EMJ study, which has been extensively used in campaigns by the Royal College of Emergency Medicine (RCEM) to highlight the apocalyptic state of English A&E departments, used comprehensive data from April 2016 to March 2018 but only for admitted patients. Conservative estimates based on this study suggest that long waits cause between 10,000 and 20,000 extra deaths every year. The EMJ study did not estimate mortality for waits longer than 12hr due to the small numbers (they were below 2% of attendance in the period; they are over 10% now). The RCEM derived the excess deaths estimates using total published numbers for 12hr waits and the EMJ mortality estimate for shorter waits of 8-12hr. More recent RCEM estimates of excess deaths were smaller than their originals because the EMJ data only covered admitted patients but recent NHS data says about one third of 12hr waits were discharged (an astounding statistic by itself) and the EMJ analysis did not estimate mortality for discharged patients.


It is notable that NHSE leadership’s response to the original publication was to dismiss the results. It is well worth reading the evidence session to the House of Commons Health Committee where Adrian Boyle of the RCEM presented the case and NHSE leaders dismissed it.


One argument too easily used to dismiss the importance of waiting times is that many other factors also influence mortality, and some of them also increase waiting times, making it complex to describe the part of the excess mortality attributable just to long waits (though the EMJ paper went to great lengths to adjust for this and still concluded that waits were a big factor).


When rumours emerged that the ONS were doing an updated version of the analysis, there was some hope that it might swing the debate so the NHS would pay more attention to the problem. But the way the ONS chose to present their results blunted some of their possible impact as we shall see.


What the ONS did

The ONS created a cohort to analyse based on data from three datasets: the 2021 census (for demographic information); the ONS death registration data; and the complete NHS patient-level data about all attendances at major (type 1) A&E departments for the financial year ending in March 2022.


The link to census data allows adjustments to expected death rates based on factors recorded in the census. The death registration data allows actual death rates to be analysed (the specific mortality metric is deaths within 30 days of hospital discharge). The A&E attendance data allows the analysis to cover all A&E attendances in the data. 


One important fact to note (in principle) is that this analysis is not based on sample data but on actual data. The total number of deaths is not an estimate but a count of actual deaths (so removing a major potential source of statistical uncertainty). This is also true of the EMJ analysis. There can be minor data quality issues because of poor data recording. For example, not all the patients in the A&E data can be matched to the other datasets (but this misses fewer than 5% of the total so should not be a big issue).


In short this should be a very high quality dataset leading to very reliable results.


In presenting the evidence the ONS chose to mostly present the adjusted data (so the mortality differences are adjusted to take account of multiple factors other than waiting times that also influence mortality). The EMJ paper also did this adjustment but using a completely different method.


Only one dataset in the ONS release does not adjust mortality for other factors. But they did not describe their adjustments in detail or how many patients were omitted from the final cohort totals (this will be a big issue as described later). This lack of detail does not mean their results are not notable or important but it does create some opportunities to cast doubt on the conclusions (many of which will be unfair or downright wrong but the ONS could have avoided the potential for criticism if they had provided more detail).


What were the key results?

I’m going to go through the key results and present many of them as charts which are a lot easier to understand than the raw tables released by the ONS. In the next section I will describe some of the issues that could have been avoided if the ONS had released the additional data they must hold, given the analysis they have done.


The first chart here presents the raw data in the cohort they used for analysis:



The bar chart shows the total number of patients who waited different amounts of time to leave the A&E. The data counts the total attendances and the total 30-day deaths for each waiting time. The chart shows the raw analysis as the proportion of people in each waiting time group who died. Below 4hr the raw mortality rate is <0.5% for all arrivals. By the time waits are 12hr long that number is about 5%. That’s a big increase but hard to interpret because, for example, perhaps the cohort waiting over 12hr contains far more old people who are far more likely to die. Other analyses have adjusted for many such effects.


The total number of patients covered in the chart is about 6.7m.


This, as will be discussed later, is less than half the number recorded as attending major A&Es in the same time period which was about 16.1m. Where did the missing patients go? The ONS don’t explain. So, is the cohort representative of the attendance? Probably. This table shows the stats on grouped waiting times for the ONS cohort:



So the reported public waiting time statistics for this year are close to the ONS cohort despite the cohort being less than half the size of the total attends.


It is also worth noting that the reported proportion of waits over 12hr is close to the official annual reported number (close to 5.8%). This proportion has more than doubled since the year this analysis was done (December 2024, for example, had more than 10% of all attendance waiting more than 12hr).


All the other tables reported by the ONS don’t cite raw numbers but, rather, odds ratios after extensive adjustments to account for confounders. The details are not given (which may cause some complaints).


In most cases the odds ratio describes the odds of dying for a particular waiting time group relative to the chosen comparison waiting time group (which, I think, means the group labelled 2hr). The waiting groups, technically, mean all the waits that round down to the number. So the group labelled 2hr means all patients waiting between 2hr and 2hr 59mins.
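For anyone unfamiliar with odds ratios, here is a minimal sketch of how the ratio compares two groups. The mortality rates are made up for illustration; they are not ONS figures:

```python
# Minimal sketch of an odds ratio between two waiting-time groups.
# The mortality rates below are invented for illustration; they are
# NOT the ONS figures.

def odds(p):
    """Convert a probability into odds."""
    return p / (1 - p)

def odds_ratio(p_group, p_reference):
    """Odds of dying in a group relative to the reference group."""
    return odds(p_group) / odds(p_reference)

p_ref = 0.010   # hypothetical 30-day mortality in the 2hr reference group
p_12h = 0.019   # hypothetical 30-day mortality in the 12hr group

print(round(odds_ratio(p_12h, p_ref), 2))  # 1.92: odds nearly doubled
```

At the low mortality rates involved here, an odds ratio is numerically close to the relative risk, which is why an odds ratio of about 2 can loosely be read as “mortality roughly doubled”.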


The important message which stands out–whatever the method–is that long waits are bad for mortality in every subgroup even after extensive adjustment for other factors. Sometimes the effect of long waits is very bad. This is a stronger result than the EMJ analysis. 


So what do those results look like?


This is the analysis by admission status:



This shows the different effect of long waits on mortality for admitted patients and discharged patients. Mortality for admitted patients clearly rises with longer waits and rises by about 30-40% for waits of 8-12hr (not grossly different from the analysis in the EMJ paper).


The mortality rate rises far faster for discharged patients, with the rate nearly doubled for 8hr waits and tripled for 12hr waits. This is important as previous estimates of excess deaths ignored mortality in discharged patients.


But we can’t judge from this data whether mortality is worse for discharged patients because the base mortality isn’t shown for either group (this observation applies to all the odds-ratio data presented by the ONS and is a big issue when trying to judge the importance of some of the results). We know that admitted patients are perhaps 10 times more likely to die than discharged patients so a small increase in their mortality means more deaths than a similar increase in the mortality of discharged patients. 


The message would be far stronger if the ONS released the additional data they must have for the base mortality rates and the number of patients in each wait group (then we could calculate the total excess deaths easily as has been done with the EMJ analysis). 
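To show why those missing numbers matter, here is a back-of-envelope sketch of the calculation that base rates and cohort sizes would make possible. Every number in it is invented for illustration; none are ONS figures:

```python
# Back-of-envelope excess deaths from odds ratios, base rates and
# cohort sizes. Every number here is invented for illustration; none
# are ONS figures.

def risk_from_odds_ratio(base_risk, odds_ratio):
    """Convert an odds ratio back into an absolute risk, given the
    baseline risk it was measured against."""
    base_odds = base_risk / (1 - base_risk)
    new_odds = base_odds * odds_ratio
    return new_odds / (1 + new_odds)

base_risk = 0.01  # hypothetical mortality at the reference wait

# (patients in wait band, odds ratio vs the reference) -- hypothetical
bands = [(500_000, 1.0), (200_000, 1.3), (100_000, 1.9)]

excess = sum(
    n * (risk_from_odds_ratio(base_risk, or_band) - base_risk)
    for n, or_band in bands
)
print(round(excess))  # ~1475 hypothetical excess deaths
```

Nothing here is sophisticated: it is exactly the kind of simple arithmetic that journalists and campaigners could do if the ONS published the underlying counts.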


But I don’t want to undermine the message that still stands out in this analysis: long waits are bad for patient mortality even when you adjust for all the possible confounding factors.


The ONS also analysed the effect on the mortality in different age groups:

Again, the mortality mostly rises with waits over 4hr, sometimes by a lot. Again it is hard to judge how many deaths this adds up to because we don’t know the base rates for any group. And there is the strange pattern for waits for the young (I think the band labelled “20” means anyone under 20). But this might be a product of having very few long waits in that cohort. Also, children’s A&Es have far better waiting performance than others.


This possible explanation is reinforced by showing the age odds ratios with the statistical confidence intervals:



Note that in this chart the scales for each cohort are different to accommodate the large 95% confidence intervals and very different odds ratios for some groups. Those intervals are strongly driven by the sample size so very wide intervals imply a small and potentially unreliable sample.


The ONS also analysed the odds ratios by the primary complaint at arrival. This is that chart:



There are some odd anomalies here, mostly in groups which probably have small sample sizes. 


The results are clearer if we stick to a single time cohort and compare the odds ratios for each condition. 




The highest risk increases are in the groups with the widest error bars so might be unreliable (again if the ONS gave us the raw cohort sizes we could make a better judgement).


But the key message remains unchanged. At 12hr the mortality risk is between 50% and 400% higher than at 2hr even for conditions where the confidence intervals are tight.


Problems with the ONS data

While the key message of the analyses is clear, there are problems in how the ONS have chosen to present the results and one major potential issue with the data. Both might be used to undermine the results but are also easy for the ONS to fix without doing any more analysis.


The biggest issue is the size of the total cohort. The ONS claims to have a near-comprehensive dataset of people attending A&E. They claim to have omitted some data but hint that this didn’t cause large numbers of omissions. But their complete cohort only has 6.7m patients when about 16.1m attended A&E in the period. That’s a big gap.


It may not matter as the 6.7m seems to be fairly representative of all attendances. But the failure to report where the missing records went is annoying. It might even be an error caused by their unfamiliarity with A&E data. Good A&E analysts will have approximate numbers of total attendance in their heads and will instantly wonder, as I did, why the sample is only 6.7m big. It is possible that the ONS accidentally omitted a big chunk of their data and nobody noticed the gap and, therefore, didn’t see that there was anything to explain. They claim to have collaborated with the RCEM, DHSC and NHSE but someone there should have noticed this gap (though, cynically, I might question the motivation of NHSE to correct errors given their track record of downright denying the EMJ analysis).


The other problem with the analysis as presented is that it omits the information needed to translate the analysis into simpler, starker counts of the number of excess deaths (a number that seems to have been very effective in getting journalistic and public attention).


This could be easily fixed without further analysis. The ONS could simply provide the actual base rates and cohort sizes for each analysis. As the analysis currently stands we don’t know, for example, the number of 80 year olds in the sample or the proportion of the attenders who were admitted.


So we can tell that the rate of death rises with longer waits but not the number of deaths that leads to. 


Conclusion

The key message should not be ignored. Long waits kill patients. 


This is particularly important given that the number of long waits has risen rapidly and is still rising. The annual total waiting more than 12hr is ten times higher than when the EMJ analysis was done and has more than doubled since the period covered by the ONS analysis.


But A&E performance is not a current top NHSE priority. And the performance target being set by NHSE is based on an unambitious goal for 4hr performance. Some of us, and the RCEM, have suggested that A&E performance should be at least as important an improvement goal as elective waits, and that the first target for A&E performance should be to eliminate 12hr waits, not to make minor improvements in 4hr waits.


I’m sure that the response to these results will contain a phrase something like “long A&E waits are completely unacceptable”, perhaps accompanied by “everyone is trying extremely hard to improve A&E performance”. This is the stock answer when the consequences of A&E crowding hit the headlines. But, as Yoda said in Star Wars: “Do or do not. There is no try.” Right now there is a lot of trying but not much doing.


Saturday, 13 April 2024

Long waits in A&E kill patients and NHSE denials are not an appropriate response



This is a long blog, sorry. But I wanted to document in more detail the key arguments about the excess deaths and the NHS response. 


The Royal College of Emergency Medicine has been campaigning strongly on its estimates of the mortality caused by long waits in A&E departments. RCEM recently updated their estimates using new data about how many long waits there were last year. The response from NHSE continued to be denial rather than action. 


Partly because the NHSE response continues to repeat what are, at best, extremely misleading ideas and, at worst, deliberately devious distractions from an important issue, I think it is worth a longer look at the topic to clarify where the numbers come from and whether the NHSE response is credible.


The background and history

The Royal College of Emergency Medicine attracted headlines in most media in early April 2024 (The Guardian, The Times, Sky News, The BBC, and even the Telegraph) with a new estimate of the number of excess deaths caused by long accident and emergency waits.


Their updated calculations suggested more than 250 extra deaths are occurring every week because of long A&E waits. They had released similar analysis in 2022 and their president, Adrian Boyle, had explained and defended their calculations in front of the House of Commons Health Committee in January 2023 where Chris Hopson, the NHSE propaganda Strategy Director, responded alongside a couple of other senior directors. Adrian Boyle did a good job but the NHSE response basically consisted of denial and diversion.


The key responses from NHSE in 2024 are largely the same. It is worth recording some of them as this will become important when I dig into the detail later.


Chris Hopson made 3 key points to the Health Committee:

“The first issue is the pressure on the urgent and emergency care pathway. We know that the NHS has been under an unprecedented degree of pressure on that pathway. We know that has led to significantly longer waits than we have seen before and we know that those longer waits are associated with poorer outcomes.”


“The second issue is that at the same time—Chair, you quoted these figures in your earlier questioning—we are seeing higher levels of excess deaths over the winter months. Those higher levels of excess deaths are not unusual. That will obviously reflect flu, cold weather snaps and covid….It is right, as I said when I did my interview, that experts at the ONS, supported by the chief medical office and working with the chief medical officer, continue to analyse the reasons for that higher level of excess death.”


“The third issue is that obviously, when you combine the two, which is the link between the pressures on the urgent and emergency care pathway and the higher levels of excess mortality, the widely quoted 300 to 500 a week figure that is, as you have heard, based on a study in the Emergency Medicine Journal suggests a link to delays in admitting patients from emergency departments and all-cause 30-day mortality. The key phrase is “suggests a link”. … that figure of 300 to 500 cannot be definitive and does not give a full and certain picture. That is why both I and our chief medical officer, Sir Steve Powis, said we did not recognise that figure, while … recognising that longer waits are associated with poorer outcomes.”


The press release in response to the 2024 RCEM estimates basically repeated shorter versions of these arguments. It added a claim that performance had turned the corner and was now improving. That claim relied on a small improvement in 4hr performance in March when NHSE put a great deal of pressure on the system to meet a 76% interim improvement target. That target was still missed by 2% and the total number of 12hr waits in the year to March 2023 was down just 24k from the 1,733k in the previous year. You can judge for yourself whether that counts as a notable amount of improvement.


Those claims are problematic 

Even without examining the details of the RCEM calculation, it is easy to see why the NHSE response is deeply disingenuous. A charitable interpretation is that the senior directors at NHSE who have commented didn’t understand the RCEM analysis as presented at the Health Committee. But they had a year to do some homework since the data used by the RCEM had been published. And they have failed to change their responses in the 15 months since the committee hearing which undermines that explanation.


Hopson’s first point was that “pressure” (which I think means volume of patients) is causing poor performance. That pressure, he claims, leads to longer waits so it isn’t the NHS’s fault. But this point is directly contradicted by the analysis in the new UEC Strategy also released in January 2023 which clearly states “the number of attendances is not the thing primarily driving performance” (BTW, that admission represented a major reversal of NHSE strategy for improving emergency care which had for a decade sought to divert patients away from A&E to lower volumes in the hope it would improve performance despite multiple previous analyses saying it would not). Hopson’s claim that “We know that has led to significantly longer waits than we have seen before” is directly contradicted by his own strategy. To be fair, the new strategy was published a few days after the committee hearing, so perhaps he hadn’t read it yet. Right?


The second claim is basically that excess deaths in winter are normal and expected. Indeed the ONS weekly excess deaths statistics show that more people die in winter. The NHSE argument is basically “Nothing to see here, move along”. But this is either a deep misunderstanding of both the RCEM claim and the ONS excess deaths publication or a deliberate attempt to distract from the implications of the RCEM results. The unfortunate use of the same name “excess deaths” might contribute to some confusion but the details of how the RCEM reached that conclusion show that the only link is the name (I will explain more later when I show how the original estimate was done).


The third claim exploits the statistical caution of the RCEM and the original authors who knew they could not prove causality in a non-randomised study. But imagine trying to get a study where A&E arrivals were randomly allocated to different lengths of wait past any ethics committee. The question is how robust are the estimates from a good observational study given a randomised trial proving causality is impossible. I will come back to this when I explain how the original estimates were done. The key point is that NHSE have tried to avoid engaging with the detail of the estimates by dismissing the results as something they don’t recognise.


What was the basis for the original analysis by the RCEM?

The RCEM estimates are derived from a major study in the Emergency Medicine Journal (EMJ) published in January 2022. To understand the basis of the RCEM calculations I first need to explain how the original EMJ paper was done (luckily, I’m a co-author).


The motivation for doing the EMJ study was twofold. One reason was to understand the extent that A&E performance was deteriorating and the consequences of that. The other was to provide some more concrete evidence about why having a 4hr target was so important. At the time the study was started, performance was declining and many were questioning whether the standard was merely an arbitrary management target or was based on measurable clinical criteria (the original standard was driven by clinical experts but a decade later many had forgotten this). 


I had realised that–now the patient level A&E data was reliable and the ONS kept a linkable dataset of deaths within 30 days of hospital discharge–it was possible to measure the mortality rate of groups of patients with different characteristics. In particular it would be possible to measure whether patients with long waits had higher mortality than those with shorter waits. Some studies in other countries had suggested that long waits did increase mortality for all patient types. But those other studies used less comprehensive data than the available data held by the NHS. We could do better.


The statisticians involved in the EMJ team realised that, in order to get unchallengeable results, it would be important to rule out some of the possible confounders. In particular, it feels intuitively obvious that sicker or older patients need longer treatment times in A&E (which would imply that the cause of higher mortality would be their clinical state, not the length of time they waited). On the other hand, almost no NHS patients waited longer than 4hr in 2010, which suggests that time spent in A&E is not itself caused by clinical need. The volume of patients had not changed dramatically since 2010 and there were far more doctors, but speed/performance had declined a lot. Nevertheless the statisticians wanted enough data to rule out confounders like morbidity and age. So the team chose to look only at admitted patients, where the inpatient HES data gives far richer evidence on patient morbidity than the A&E HES data.


So that is what the study did. Two years’ worth of patient-level data (2016-2018) from all English A&E admissions (about 5m admissions in total) was linked to the ONS 30-day mortality data enabling direct measurement of the mortality rates of patients with different characteristics, including how long they waited to be admitted. It is important to note that the study is not estimating mortality, it is estimating which factors are related to observed mortality.


To cut out a lot of detail, the study showed that the waiting time before admission made a significant difference to the mortality rate even after adjusting for other possible confounders.


Crudely the overall mortality rate for admitted patients is about 8%. But for patients who wait between 8 and 12hr, that rises to nearly 10%. What the study estimates is how many extra deaths there are for patients with longer waits compared to the mortality for those who wait less than 4hr. In fact the mortality rises linearly for every extra hour waited beyond 4hr. For every 191 waits between 4 and 6hr there is one extra death; for every 72 waits of 8-12hr there is an extra death. There were not enough >12hr waits to get a good estimate of the mortality there but, given the strong trend of higher mortality with longer waits, it is reasonable to conclude that it is higher for waits longer than 12hr. Obviously there are some error bars worth adding, but given the base data includes 5m individual patient records, there is a lot less uncertainty than you might think.


This chart from the paper summarises the relationship (SMR is the standardised mortality ratio):



The basic conclusion is that long waits before admission are associated with higher death rates even after considering patient morbidity. Since it isn’t an RCT, careful statisticians won’t claim they can prove causality, but this is a big study done carefully that comes as close to estimating causality as it is possible to get. It might technically be an association, but there are big flashing red lights hinting that the effect is real, significant and causal.


One other thing worth noting is that the study was not funded. NHSE didn’t pay, nor did any other body or think tank. Everyone involved gave their time freely because they recognised the importance of getting hard evidence. 


All the subsequent estimates by the RCEM and others are based on the mortality rates observed in the EMJ study updated to reflect the number of long waits in later years.


How does the RCEM turn numbers of long waits into estimates of excess deaths?

Most of the estimates of current excess deaths apply the EMJ paper's simpler grouping of waiting times and mortality rates to current counts of waiting times in A&Es.


For convenience the paper calculated NNH (number needed to harm) for 3 different groups of waiting times: 4-6 hr (191); 6-8hr (82); and 8-12hr (72). What the NNH means is that, for example, there is one extra death for every 191 patients waiting between 4 and 6 hours.


These NNH figures can be used to estimate excess deaths directly from the known numbers of patients waiting in those time bands. Assuming, of course, that the mortality rates have stayed similar to the rates in the period under study.
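As a sketch of that arithmetic: one extra death per NNH patients in each waiting band. The NNH values are those reported from the EMJ study; the weekly wait counts below are purely hypothetical placeholders, there only to show the shape of the calculation.

```python
# NNH (number needed to harm) values from the EMJ study.
NNH = {"4-6h": 191, "6-8h": 82, "8-12h": 72}

# Hypothetical weekly counts of admitted patients in each waiting band
# (NOT real NHS figures - illustration only).
weekly_waits = {"4-6h": 40_000, "6-8h": 25_000, "8-12h": 18_000}

# One extra death for every NNH patients waiting in that band.
excess_deaths = {band: n / NNH[band] for band, n in weekly_waits.items()}
total = sum(excess_deaths.values())
print(f"Estimated excess deaths per week: {total:.0f}")
```

With real band counts substituted in, this is essentially the whole method: a division per band and a sum.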


But the A&E statistics that are normally published don't count the number of waits in those bands. Total waits >4hr is routinely published (that is the definition of the A&E target). But, due partly to a media furore in 2016 triggered by the increasing number of anecdotes about 12hr waits, NHS Digital did start publishing annual totals of 12hr waits.


The following chart shows those totals (the red line is the annual count of 12hr waits) in the context of total major A&E attendance:





For context, in case this was not clear, in 2022/23 about 11% of all arrivals waited more than 12hr to leave the A&E. The target is for fewer than 5% to wait more than 4hr.


NHSE resisted publishing more details of 12hr waits for a long time. They didn't relent until last year, when monthly 12hr totals were included in the monthly performance numbers.


Knowing the annual total 12hr waits gives at least some basis for starting to estimate excess deaths from long waits. And the initial RCEM estimates were based on applying the EMJ mortality rates to the annual 12hr totals. 


That's what the RCEM used. They assumed, as most experts did, that most 12hr waits were for admitted patients, so the EMJ NNH for 8-12hr waits could be applied to them. Since the mortality estimated by the EMJ work increases with every hour waited, this should give a conservative estimate of the mortality rate for the group waiting >12hr. Their 2023 estimate was that between 300 and 500 extra deaths occurred every week from long waits.


Independent actuaries and statisticians have cast their expert eyes over these numbers and found them plausible. This Full Fact analysis from January 2023 has a good summary of their opinions of the original claim.


In 2024 the RCEM FOI'd the system for better data. It turned out that the assumption that most 12hr waits were for admitted patients was false: about 30% are discharged after their 12hr wait. The EMJ study didn't estimate the mortality for discharged patients, so the RCEM excluded them to get a more reliable excess death estimate for the group waiting >12hr for admission. This still left a shocking but slightly lower estimate of an average of 250 deaths per week caused by long waits.
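The revised figure is the same NNH division with the discharged patients stripped out. A minimal sketch, where the annual total of 12hr waits is a hypothetical round number chosen only to show the structure (the admitted share and NNH come from the text above):

```python
# RCEM-style revised estimate: apply the 8-12h NNH only to admitted >12h waits.
annual_12h_waits = 1_400_000   # hypothetical annual total of >12h waits
admitted_share = 2 / 3         # per the FOI: roughly 30% are discharged
NNH_8_12h = 72                 # EMJ NNH for the 8-12h band

weekly_excess = annual_12h_waits * admitted_share / NNH_8_12h / 52
print(f"~{weekly_excess:.0f} excess deaths per week")
```

An annual total in the low millions, with a third of waits excluded, lands in the region of the RCEM's 250-a-week figure.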


But note the conservatism of this estimate. It applies the mortality rate for the 8-12hr wait group to the >12hr wait group even though there is good reason to think mortality should be higher for those longer waits. And it ignores any mortality for discharged patients, not because there isn't likely to be any, but because the EMJ paper didn't estimate it.


A similar FOI done about the same time as the RCEM one shed some further light on this and allows a different estimate. This is from an FOI by The Independent:


We can see in this the total attendances broken down by waiting time and also by whether the patient was admitted or not. The RCEM were right: only ⅔ of 12hr waits are admitted. But we also have the number of waits longer than 4hr and under 12hr (where slightly less than 30% are admitted). This allows an additional excess deaths estimate based on the below-12hr waits. Even if we take the lowest mortality band from the EMJ study (NNH is 191 for waits between 4 and 6hr) this suggests an extra 150 deaths per week.
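That 150-a-week floor is again just a division by NNH, this time deliberately using the gentlest mortality band. A sketch with an illustrative (hypothetical) annual count of admitted 4-12hr waits:

```python
# Conservative floor: apply the lowest-mortality NNH (the 4-6h band) to ALL
# admitted 4-12h waits, understating the harm in the 6-8h and 8-12h bands.
annual_4_to_12h_admitted = 1_500_000  # hypothetical annual count, not official
NNH_lowest = 191                      # EMJ NNH for 4-6h waits

weekly_floor = annual_4_to_12h_admitted / NNH_lowest / 52
print(f"at least ~{weekly_floor:.0f} extra deaths per week")
```

Because every wait over 6hr is treated as if it carried only the 4-6hr risk, any realistic correction pushes this number up, not down.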


An additional comment is worth making. It is downright astonishing that so many patients wait 12hr only to be discharged. This alone should be a major indicator of an appalling level of dysfunction in our A&E departments. 




The implications of the numbers and the NHSE response

If hundreds of deaths a week are occurring because patients are waiting too long to leave A&E, that is surely one of the most significant and important problems for the NHS.


But the leadership in NHSE “don’t recognise the numbers” and claim that it is the job of the ONS to calculate excess deaths. NHSE said this to Full Fact (see the above link, highlights are mine):


“When asked on the BBC if he accepted that A&E delays have caused deaths, Professor Stephen Powis, National Medical Director of NHSE, said “it’s not unusual to see high levels of excess deaths in the winter”.

 

When pushed to give an NHSE estimate of deaths due to delays in A&E he said it is “very difficult to say” but that it was “not for us at [NHSE] to produce those figures, [it’s] for the ONS and others to look into”.

 

However an ONS spokesperson told us: “We are not able to produce any analysis on deaths that are due to A&E delays. Our statistics are based on death registrations, so we analyse deaths (excess deaths in this case) based on information collected on the cause of death from the death registration.”


NHSE’s Chief Strategy Officer Chris Hopson also previously told the Today programme “a full and detailed look at the evidence…is now under way”, but we don’t have any further details of that work, or even know who is doing it.


In the press release to the March 2024 revision of the RCEM claims, those responses were broadly repeated.


The statement by the ONS undermines NHSE's diversionary claim that “excess deaths” is what the ONS do. The ONS excess deaths analysis is unrelated to the EMJ analysis: the specific calculation of the relationship between waiting times and mortality requires NHS data the ONS doesn't routinely analyse. Yet the same claim that this was the ONS's job was repeated 15 months after the ONS denied it. Chris Hopson's claim that “a full and detailed look at the evidence…is now under way” would be a welcome development, but no evidence has emerged in 15 months that this is happening.


This is particularly frustrating as NHSE's own analysts are the only people who have access to all the data needed to repeat the EMJ analysis. If a competent analyst were asked to take a quick look at the data, they would have an approximate estimate of the credibility of the EMJ analysis within a week. Better still, they could extend the analysis to include the discharged patients the EMJ analysis ignored. And they could use all the data since 2018 to update the estimates and test whether the problem was getting better or worse over time.


If the EMJ analysis lacks credibility or is downright wrong, NHSE could show why quickly by repeating the analysis themselves. There are several possible reasons why they have not done this. One is that the leadership doesn't understand just how easy it would be for their own analysts to do it. That is disturbingly plausible, but they could call any of the EMJ authors and ask; as far as I know, none of them have been contacted by NHSE. Another is that they are showing wilful blindness to the severity of the crisis in A&E. The worst explanation is that they have looked at the evidence, found things are even worse than the EMJ estimated, and really don't want to admit that.


The important issue is that, while some other organisations could repeat the EMJ analysis (though more slowly and with older data), NHSE are the only organisation who could do a thorough job on up-to-date data. Despite a promise to “look into” the evidence made in January 2023, there is no evidence this has been done.


The importance of the results (statistics are a lot less compelling than single patient anecdotes)

The death of one man is a tragedy. The death of millions is a statistic. (falsely attributed to Stalin, actually a paraphrase of earlier work by Kurt Tucholsky).


The influence of media stories about bad things happening in the NHS is dominated by personal anecdotes. They work well in headlines and writing because they provide that personal link that plucks the strings of empathy. The handful of deaths caused by nurse Lucy Letby are given outsized impact because the media can report the personal stories from the families and staff. Even the scandal of Mid Staffordshire (potential deaths caused by poor practice estimated anywhere between hardly any and a thousand) is far more salient in the public mind because of the personal stories from some of the victims and their families.


But this distorts the perception of where big problems are. There are no such stories about the hundreds of excess deaths every week in A&E. At most we get stories about how awful it is to be stuck on a trolley for 12 hours. But we can’t identify the individuals who died early because of long waits as the weekly totals are merely statistics and it is impossible to separate the 8% of admissions who would have died with a 4hr wait from the extra 2% who died because of a long wait. 


The huge scale of the problem is a statistic and the media don’t treat it as a tragedy.


So, a lack of compelling personal anecdotes leaves public discussion of NHS problems deeply unbalanced. The NHSE leadership can’t use this as an excuse. They have a duty to understand which problems are biggest and the measure for that is the statistics not the anecdotes or the number of bad news stories in the media. If they don’t recognise the scale of the problem, they won’t devote the right amount of focussed effort to fix it.


Even conservative estimates of the excess deaths associated with long waits put them at 20k per year. That's far more than the total number of deaths estimated from the NHS contaminated blood scandal. It is on the same scale as the total estimated deaths from heart attacks caused by Merck's Vioxx (rofecoxib) painkiller, which forced Merck to withdraw the widely used drug.


But NHSE continues to deny the statistics. And, while the media in general have discussed it, it has not received anything like the same emphasis as the stories containing personal anecdotes.


What should NHSE do?

To me there are a handful of key actions that are necessary:

  1. Immediately stop trying to deflect from the issue with weak excuses or spurious arguments.

  2. Repeat the EMJ study using the more recent data that NHSE have unique access to. Do it for recent data and for the 7 or so years of old data that would also cover the initial EMJ study. Also assess whether discharged patients see elevated mortality.

  3. Be open with the results so independent experts can either refute the EMJ claims or refine the claims. 

  4. If the EMJ results hold up, immediately rethink the priorities for where action is most urgently needed to improve the NHS and adopt a much tighter focus until the biggest problem is fixed.


According to an old Mark Twain pun, Denial isn’t just a river in Egypt. The NHS can’t afford an NHSE that is taking a whole riverboat cruise there.