Long waits in A&E kill patients. A new ONS analysis of mortality and A&E waits, despite issues in how the analysis and results are presented, makes this look like an even bigger problem than previous analyses.
In January 2025 the Office for National Statistics (ONS) released a new analysis of the relationship between A&E waiting times and mortality.
This is an important study: knowing where NHS performance is killing patients unnecessarily is a major indicator of where the system’s biggest and most important problems lie.
But the results will be more contested and confusing than they needed to be because the ONS have presented them badly and have omitted key data, making the importance of the results harder to judge and harder to compare with previous analyses.
This note is an attempt to explain the significance of the ONS results while also suggesting some of the improvements that could be made to make the results more useful.
The background to this analysis
The ONS are not the first to attempt to estimate the excess mortality caused by long waits. A previous analysis (of which I was a co-author) was published in 2022 in the Emergency Medicine Journal (EMJ) and also used NHS patient-level data to derive reliable estimates of the relationship between long waits and mortality.
This work was partly inspired by a previous Canadian study which also concluded that longer waits substantially increase mortality but with less reliable data on length of wait (the UK studies use data that contains the wait for individual patients).
The EMJ study, which has been used extensively in campaigns by the Royal College of Emergency Medicine (RCEM) to highlight the apocalyptic state of English A&E departments, used comprehensive data from April 2016 to March 2018 but only for admitted patients. Conservative estimates based on this study suggest that long waits cause between 10,000 and 20,000 extra deaths every year. The EMJ study did not estimate mortality for waits longer than 12hr because the numbers were too small (below 2% of attendances in that period; they are over 10% now). The RCEM derived their excess-deaths estimates by applying the EMJ mortality estimate for 8-12hr waits to the total published numbers of 12hr waits. Their recent estimates of excess deaths were smaller than their original ones because the EMJ data only covered admitted patients, while recent NHS data shows that about one third of patients waiting over 12hr were discharged (an astounding statistic in itself) and the EMJ analysis did not estimate mortality for discharged patients.
It is notable that NHSE leadership’s response to the original publication was to dismiss the results. It is well worth reading the evidence session to the House of Commons Health Committee where Adrian Boyle of the RCEM presented the case and NHSE leaders dismissed it.
One argument too easily used to dismiss the importance of waiting times is that many other factors also influence mortality, and some of them also increase waiting times, which makes attributing part of the excess mortality specifically to long waits complex (though the EMJ paper went to great lengths to adjust for this and still concluded that waits were a big factor).
When rumours emerged that the ONS were doing an updated version of the analysis, there was some hope that it might swing the debate so the NHS would pay more attention to the problem. But the way the ONS chose to present their results blunted some of their possible impact as we shall see.
What the ONS did
The ONS created a cohort to analyse based on data from three datasets: the 2021 census (for demographic information); the ONS death registration data; and the complete NHS patient-level data about all attendances at major (type 1) A&E departments for the financial year ending in March 2022.
The link to census data allows adjustments to expected death rates based on factors recorded in the census. The death registration data allows actual death rates to be analysed (the specific mortality metric is deaths within 30 days of hospital discharge). The A&E attendance data allows the analysis to cover all A&E attendances in the data.
One important fact to note (in principle) is that this analysis is based not on a sample but on the complete data. The total number of deaths is not an estimate but a count of actual deaths (removing a major potential source of statistical uncertainty). This is also true of the EMJ analysis. There can be minor data quality issues because of poor data recording. For example, not all the patients in the A&E data can be matched to the other datasets (but this misses fewer than 5% of the total so should not be a big issue).
In short this should be a very high quality dataset leading to very reliable results.
In presenting the evidence the ONS chose to mostly present the adjusted data (so the mortality differences are adjusted to take account of multiple factors other than waiting times that also influence mortality). The EMJ paper also did this adjustment but using a completely different method.
Only one dataset in the ONS release does not adjust mortality for other factors. But they did not describe their adjustments in detail, or how many patients were omitted from the final cohort totals (this turns out to be a big issue, as described later). This lack of detail does not mean their results are not notable or important, but it does create opportunities to cast doubt on the conclusions (many of which will be unfair or downright wrong, but the ONS could have avoided the potential for criticism by providing more detail).
What were the key results?
I’m going to go through the key results and present many of them as charts, which are a lot easier to understand than the raw tables released by the ONS. In the next section I will describe some of the issues that could have been avoided if the ONS had released additional data they must hold in order to have produced this analysis.
The first chart here presents the raw data in the cohort they used for analysis:
The bar chart shows the total number of patients who waited different amounts of time to leave the A&E. The data counts the total attendances and the total 30-day deaths for each waiting time. The chart shows the raw analysis as the proportion of people in each waiting time group who died. Below 4hr the raw mortality rate is <0.5% for all arrivals. By the time waits are 12hr long that number is about 5%. That’s a big increase but hard to interpret because, for example, perhaps the cohort waiting over 12hr contains far more old people who are far more likely to die. Other analyses have adjusted for many such effects.
The total number of patients covered in the chart is about 6.7m.
This, as will be discussed later, is less than half the number recorded as attending major A&Es in the same time period which was about 16.1m. Where did the missing patients go? The ONS don’t explain. So, is the cohort representative of the attendance? Probably. This table shows the stats on grouped waiting times for the ONS cohort:
So the reported public waiting time statistics for this year are close to the ONS cohort, despite the cohort being less than half the size of the total attendances.
It is also worth noting that the reported proportion of waits over 12hr is close to the official annual reported number (close to 5.8%). This proportion has more than doubled since the year this analysis covers (December 2024, for example, had more than 10% of all attendances waiting more than 12hr).
All the other tables reported by the ONS cite not raw numbers but odds ratios after extensive adjustments to account for confounders. The details are not given (which may prompt some complaints).
In most cases the odds ratio describes the probability of mortality for a particular waiting time group relative to the chosen comparison waiting time (which, I think, means the group labelled 2hr). The waiting groups, technically, mean all the waits that round down to the number. So the group labelled 2hr means all patients waiting between 2hr and 2hr 59mins.
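To make the metric concrete, here is a minimal sketch of how an odds ratio against the 2hr reference band is computed from raw counts. The counts below are made-up placeholders for illustration, not ONS figures.

```python
# Illustrative only: odds ratio of a wait band versus the 2hr reference band.
# All counts are hypothetical, not ONS data.

def odds_ratio(deaths, survivors, ref_deaths, ref_survivors):
    """Odds of death in a wait band divided by odds in the reference band."""
    return (deaths / survivors) / (ref_deaths / ref_survivors)

# Hypothetical counts: reference band (2hr-2hr59m) with ~0.5% crude mortality,
# a 12hr band with ~2% crude mortality.
ref_deaths, ref_survivors = 500, 99_500
band_deaths, band_survivors = 400, 19_600

print(round(odds_ratio(band_deaths, band_survivors, ref_deaths, ref_survivors), 2))  # → 4.06
```

Note that an odds ratio of 4 is close to, but not the same as, a fourfold risk when base mortality is low; at higher base rates the two diverge.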
The important message which stands out, whatever the method, is that long waits are bad for mortality in every subgroup even after extensive adjustment for other factors. Sometimes the effect of long waits is very bad. This is a stronger result than the EMJ analysis.
So what do those results look like?
This is the analysis by admission status:
This shows the different effect of long waits on mortality for admitted patients and discharged patients. Mortality for admitted patients clearly rises with longer waits and rises by about 30-40% for waits of 8-12hr (not grossly different from the analysis in the EMJ paper).
The mortality rate rises far faster for discharged patients, with the rate nearly doubled for 8hr waits and tripled for 12hr waits. This is important as previous estimates of excess deaths ignored mortality in discharged patients.
But we can’t judge from this data whether mortality is worse for discharged patients because the base mortality isn’t shown for either group (this observation applies to all the odds-ratio data presented by the ONS and is a big issue when trying to judge the importance of some of the results). We know that admitted patients are perhaps 10 times more likely to die than discharged patients so a small increase in their mortality means more deaths than a similar increase in the mortality of discharged patients.
The message would be far stronger if the ONS released the additional data they must have for the base mortality rates and the number of patients in each wait group (then we could calculate the total excess deaths easily as has been done with the EMJ analysis).
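As a sketch of why those missing numbers matter: given a band’s size, the reference-band mortality rate, and the adjusted odds ratio, converting back to excess deaths is simple arithmetic. Every number below is a hypothetical placeholder, not an ONS or EMJ figure.

```python
# Sketch: converting an odds ratio into excess deaths, assuming the ONS
# released the reference-band mortality rate and each wait band's size.
# All inputs are hypothetical placeholders.

def excess_deaths(n, base_rate, odds_ratio):
    """Excess deaths among n patients versus the reference mortality rate.

    Convert the odds ratio back to a probability:
    odds = OR * base_odds, then p = odds / (1 + odds).
    """
    base_odds = base_rate / (1 - base_rate)
    odds = odds_ratio * base_odds
    p = odds / (1 + odds)
    return n * (p - base_rate)

# Hypothetical: 300,000 patients waiting 8-12hr, 1% base mortality, OR of 1.4
print(round(excess_deaths(300_000, 0.01, 1.4)))
```

This is the same style of calculation the RCEM applied to the EMJ results; without the base rates and band sizes, readers of the ONS release cannot do it.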
But I don’t want to undermine the message that still stands out in this analysis: long waits are bad for patient mortality even when you adjust for all the possible confounding factors.
The ONS also analysed the effect on the mortality in different age groups:
Again, the mortality mostly rises with waits over 4hr, sometimes by a lot. Again it is hard to judge how many deaths this adds up to because we don’t know the base rates for any group. And there is the strange pattern for waits for the young (I think the band labelled “20” means anyone under 20). But this might be a product of having very few long waits in that cohort. Also, children’s A&Es have far better waiting performance than others.
This possible explanation is reinforced by showing the age odds ratios with the statistical confidence intervals:
Note that in this chart the scales for each cohort are different to accommodate the large 95% confidence intervals and very different odds ratios for some groups. Those intervals are strongly driven by the sample size so very wide intervals imply a small and potentially unreliable sample.
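The link between interval width and sample size can be shown directly: under the standard (Woolf) approximation, the standard error of a log odds ratio is sqrt(1/a + 1/b + 1/c + 1/d) over the cells of the 2x2 table, so small cells produce wide intervals. The counts below are illustrative, not ONS data.

```python
# Why wide confidence intervals suggest small samples: the standard error of a
# log odds ratio (Woolf's method) grows as any cell of the 2x2 table shrinks.
# Cell counts below are illustrative only.
import math

def or_ci(a, b, c, d, z=1.96):
    """95% CI for the odds ratio of a 2x2 table [[a, b], [c, d]]."""
    log_or = math.log((a / b) / (c / d))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Same crude odds ratio (2.0), very different cell counts:
wide = or_ci(10, 100, 5, 100)           # small cells -> wide interval
tight = or_ci(1000, 10_000, 500, 10_000)  # large cells -> tight interval
print(wide, tight)
```

With the small cells the interval spans roughly 0.7 to 6; with the large cells it narrows to roughly 1.8 to 2.2 around the same point estimate.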
The ONS also analysed the odds ratios by the primary complaint at arrival. This is that chart:
There are some odd anomalies here, mostly in groups which probably have small sample sizes.
The results are clearer if we stick to a single time cohort and compare the odds ratios for each condition.
The highest risk increases are in the groups with the widest error bars so might be unreliable (again if the ONS gave us the raw cohort sizes we could make a better judgement).
But the key message remains unchanged. At 12hr the mortality risk is between 50% and 400% higher than at 2hr even for conditions where the confidence intervals are tight.
Problems with the ONS data
While the key message of the analyses is clear, there are problems in how the ONS have chosen to present the results and one major potential issue with the data. Both might be used to undermine the results, but both are also easy for the ONS to fix without doing any more analysis.
The biggest issue is the size of the total cohort. The ONS claims to have a near-comprehensive dataset of people attending A&E. They claim to have omitted some data but hint that this didn’t cause large numbers of omissions. But their complete cohort only has 6.7m patients when about 16.1m attended A&E in the period. That’s a big gap.
It may not matter, as the 6.7m seems to be fairly representative of all attendances. But the failure to report where the missing records went is annoying. It might even be an error caused by unfamiliarity with A&E data. Good A&E analysts will have approximate totals of attendance in their heads and will instantly wonder, as I did, why the cohort is only 6.7m. It is possible that the ONS accidentally omitted a big chunk of their data and nobody noticed the gap and, therefore, didn’t see that there was anything to explain. They claim to have collaborated with the RCEM, DHSC and NHSE, but someone there should have noticed this gap (though, cynically, I might question the motivation of NHSE to correct errors given their track record in downright denying the EMJ analysis).
The other problem with the analysis as presented is that it omits the information needed to translate it into simpler, starker counts of excess deaths (a number that has proved very effective at getting journalistic and public attention).
This could be easily fixed without further analysis. The ONS could simply provide the actual base rates and cohort sizes for each analysis. As the analysis currently stands we don’t know, for example, the number of 80 year olds in the sample or the proportion of the attenders who were admitted.
So we can tell that the rate of death rises with longer waits, but not how many deaths that leads to.
Conclusion
The key message should not be ignored. Long waits kill patients.
This is particularly important given that the number of long waits has risen rapidly and is still rising. The annual total waiting more than 12hr is ten times higher than when the EMJ analysis was done and has more than doubled since the period covered by the ONS analysis.
But A&E performance is not a current top NHSE priority. And what is being set as the performance target by NHSE is an unambitious goal for 4hr performance. Some of us, and the RCEM, have argued that A&E performance should be at least as important an improvement goal as elective waits, and that the first target should be to eliminate 12hr waits, not to make minor improvements in 4hr performance.
I’m sure that the response to these results will contain a phrase something like “long A&E waits are completely unacceptable”, perhaps accompanied by “everyone is trying extremely hard to improve A&E performance”. This is the stock answer when the consequences of A&E crowding hit the headlines. But, as Yoda said in Star Wars: “Do. Or do not. There is no try.” Right now there is a lot of trying but not much doing.