The controversial NICE guidance on staffing in A&E isn't worth arguing about. The evidence base is almost nonexistent, and what evidence there is contradicts better, older analysis. The NHS should ignore its recommendations and focus on gathering better evidence.
When the HSJ prompted the release of the NICE guidance about safe staffing in A&E, I thought we might see something interesting. Then I read it and changed my mind. If anything, the analysis sets back our understanding of how to run a safe A&E department. In fact it stands as a case study in how not to do useful analysis of an important operational issue for the NHS. Here is why I reached that conclusion.
What NICE did and didn't do
The NICE guidance is based on three sources of evidence: expert judgement; a literature survey; and an economic modelling study. The documents describing these sources of evidence are now available either from the NICE website (the economic model) or the HSJ.
What NICE didn't do is gather systematic evidence from actual A&E departments in the UK about either staffing or performance (the model used limited evidence from a handful of departments and supplemented this with some average performance evidence from SITREP and HES data).
What's wrong with the evidence
The NICE review itself sums up some of what is wrong with the literature evidence. Two problems stand out from their own summary: almost none of the evidence relates to the UK and there is very little high-quality evidence to start with.
In addition to this their evidence specifically excluded evidence relating to certain important practices that are common in English A&E departments. A critical example is the exclusion of evidence relating to Emergency Nurse Practitioners (ENPs) and related specialists. This seems to have been a choice so that the recommendations could be focussed on the general level of nurse staffing.
NICE commissioned a simulation model to help clarify some of the relationships that were simply missing from the actual literature evidence. This simulation forms the only significant basis for the actual recommendations (the literature evidence is simply too flimsy and contradictory to support any solid recommendations).
The trouble is that the simulation model is itself deeply flawed. So flawed that it is hard to take its recommendations seriously. It makes assumptions that were known to be naive a decade ago, some of which directly contradict common practice in most actual A&E departments, and it produces results which disagree with actual observations about both staffing and performance. These flaws deserve a whole section to themselves.
Simulation modelling is just a way to hide the link between bad assumptions and your recommendations
[Actually I don't really mean that. Simulation modelling is an effective tool in the right hands and when the right assumptions are made. When a system is well understood but its performance is not it can provide valuable insight into how to improve. But the NICE model shows a failure to understand how A&E works and, therefore, cannot say anything useful about performance.]
The model used by NICE embeds false assumptions about how A&E operates. This is a critical failure in such an important model, but NICE didn't pay much for it, so perhaps that's all we can expect. Whatever the reason, the assumptions are sufficiently bad that the output of the model can tell us nothing useful about staffing in a real A&E department.
Here are three examples of where the model makes really unrealistic assumptions and fails to represent the reality in A&E.
The model assumes a single process for treating patients.
This means that the model assumes that patients with single minor conditions are treated in the same way and by the same people as patients with complex or multiple problems or injuries. Real A&Es don't do that. One of the major innovations that led to much faster A&E treatment times in the early 2000s was the introduction of streaming for different types of patient. The idea of "see and treat" for minors was a major innovation that recognised that many patients don't need multiple investigations or multi-skilled teams to treat them. So many A&Es designed much simpler processes which cut out multiple stages of assessment, investigation or treatment. The process is staffed by people fully qualified to both assess and treat minor injuries or conditions. Patients get assessed and, if they don't have anything complex wrong with them, they get treated immediately, often by a specially qualified nurse. This is fast and efficient. It reduces the number of staff required (by eliminating unnecessary steps in the process for the majority of patients) and speeds the treatment. It leaves far fewer patients waiting around and clogging up the waiting room, thereby reducing crowding (which is good for staff and other patients).
By ignoring this major innovation, the NICE model becomes a hypothetical model of how an A&E department might operate if nobody ever had any good ideas about how to organise one effectively. By modelling something which mostly doesn't exist, the model tells us nothing useful about staffing or performance in real A&E departments in England.
The model assumes that patients with more severe injuries go to the front of the queue
This sounds reasonable. But coupled with the previous assumption that all patients get treated in the same process it turns out to be both unrealistic and bad for patients.
It is unrealistic because it generates output where the majors get treated faster than the minors (that is what it assumes the process should be, so, naturally, that is the output the model generates). This is the opposite of what the data actually shows. In reality the majors, especially the ones who need to be admitted, have the longest waiting times. In well functioning departments the majority of minors are treated in less than 90 minutes, but it isn't uncommon for patients requiring admission to have an average waiting time of four hours or more. Moreover there is good evidence about why they wait, and it isn't, mostly, caused inside the A&E department at all but by the failure of most hospitals to manage their beds effectively (see this Monitor report on the causes of A&E delays). The model doesn't consider these delays at all.
Streaming of patients into separate processes was developed because a single process is bad for all patients when there is a mix of different patient needs. A single process is wasteful; it creates unnecessary delays for minors; and it uses more staff time for no benefit at all to the majority of patients. Streaming minors into a separate efficient process frees up staff time for the more complex needs of majors and allows the separate process of treating them to operate quickly without interfering with the process for treating minors. Having two processes achieves the result of rapid initial treatment for majors without having to bump the minors to the back of the queue.
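To make that concrete, here is a toy queue simulation written for this post (it is emphatically not the NICE model): it contrasts a single shared process in which majors jump the queue with a streamed setup where a small "see and treat" team handles minors separately. Every number in it, from the arrival rate to the treatment times and the staff split, is an illustrative assumption rather than real A&E data.

```python
# A toy multi-server queue simulation (an illustration, not the NICE model).
# It compares (a) one shared process where majors jump the queue with
# (b) streaming minors to a dedicated "see and treat" team.
import heapq
import random
from statistics import mean

random.seed(1)

MINUTES = 24 * 60                       # one simulated day
MEAN_GAP = 6.0                          # assumed mean minutes between arrivals
P_MINOR = 0.6                           # assumed share of minor attendances
TREAT = {"minor": 20.0, "major": 60.0}  # assumed mean treatment minutes


def make_arrivals():
    """Generate one day's arrivals as a time-ordered list of (time, kind)."""
    t, out = 0.0, []
    while True:
        t += random.expovariate(1.0 / MEAN_GAP)
        if t >= MINUTES:
            return out
        out.append((t, "minor" if random.random() < P_MINOR else "major"))


def run_queue(arrivals, n_staff, majors_first):
    """Simulate one multi-server queue; return waits keyed by patient type."""
    servers = [0.0] * n_staff           # time at which each clinician is next free
    heapq.heapify(servers)
    waiting = []                        # heap of (priority, arrival time, kind)
    waits = {"minor": [], "major": []}
    i = 0
    while i < len(arrivals) or waiting:
        next_arrival = arrivals[i][0] if i < len(arrivals) else float("inf")
        if waiting and servers[0] <= next_arrival:
            free_t = heapq.heappop(servers)           # a clinician frees up...
            _, arr_t, kind = heapq.heappop(waiting)   # ...and takes the head of the queue
            waits[kind].append(free_t - arr_t)
            heapq.heappush(servers, free_t + random.expovariate(1.0 / TREAT[kind]))
        else:
            arr_t, kind = arrivals[i]
            i += 1
            if servers[0] <= arr_t:                   # someone is free: no wait
                heapq.heappop(servers)
                waits[kind].append(0.0)
                heapq.heappush(servers, arr_t + random.expovariate(1.0 / TREAT[kind]))
            else:                                     # everyone busy: join the queue
                rank = 0 if (majors_first and kind == "major") else 1
                heapq.heappush(waiting, (rank, arr_t, kind))
    return waits


arrivals = make_arrivals()

# (a) Single process, majors prioritised: roughly the NICE-style assumption.
single = run_queue(arrivals, 8, majors_first=True)

# (b) Streaming: 3 staff run "see and treat" for minors, 5 handle majors.
minors = [a for a in arrivals if a[1] == "minor"]
majors = [a for a in arrivals if a[1] == "major"]
streamed = {"minor": run_queue(minors, 3, False)["minor"],
            "major": run_queue(majors, 5, False)["major"]}

for name, w in (("single queue", single), ("streamed", streamed)):
    print(f"{name:12s} mean wait: minors {mean(w['minor']):5.1f} min, "
          f"majors {mean(w['major']):5.1f} min")
```

With these made-up parameters the single-queue version pushes minors to the back while streaming gives them short waits without starving the majors; the specific numbers don't matter, but the queue discipline you assume largely determines the waiting times the model produces.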
By ignoring streaming and modelling a treatment process that real departments no longer use, the model fails to address anything useful about real-world A&E staffing or performance.
The model assumes that staff are all much the same
The focus of the model is to understand whether nurse staffing affects performance so it assumes that there are few differences among nursing grade staff and ignores issues with doctor staffing. Again the assumptions made ignore the reality of how A&E departments work.
There are two things that are well known by A&E experts that relate staffing to performance. One is that, when you stream minors to a "see and treat" process, you can use experienced nurses to deliver a lot of the treatment. These specialist staff (called advanced or Emergency Nurse Practitioners, ANPs or ENPs) are dedicated to the stream dealing with minors and allow fast treatment to be delivered efficiently for patients who don't have complex problems. Both the NICE model and the evidence review explicitly exclude anything relating to these specialists. The other staffing issue is that senior medics "on the shop floor" improve performance everywhere, probably because they can make fast, confident decisions for edge-case patients where more junior staff would dither or make poor judgement calls. This is also ignored in the NICE evidence and model.
In summary: modelling the wrong thing won't provide any useful insights
I could go on but I won't. The key point here is that if you develop a model that isn't based on the real world you won't get any useful insights about the real world. A Lego model of the Empire State Building won't tell you about the structural integrity of the real Empire State Building. If the engineers used a Lego model for this purpose you would be well advised to stay out of New York.
So NICE have created a model that is uninformed by real-world observations about how A&E actually operates; it ignores observations about how real A&E departments are staffed; it doesn't have any inputs about how they actually perform; it ignores observations about where problems exist; and it models a process that doesn't consider the biggest problem (finding beds). Why does anyone think its conclusions are useful?
What NICE should have done
Given the admitted lack of evidence about real A&E departments in England, what NICE should have done is look for useful evidence rather than waste time summarising poor-quality analysis of irrelevant systems in other countries. There are more than 150 major A&Es in England and their performance is measured both in public SITREP data and in less public but more detailed HES data. Most of these departments should have some idea of their staffing profiles and rosters. Putting those two sets of observations together would allow a rich set of "experiments" to be done by comparing the departments to each other. It might take more effort (and actual statistical skill as opposed to modelling or literature-review skill). But the results would tell us about the system we actually have.
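As a sketch of what that kind of comparison looks like, imagine expressing each department's staffing as nurse WTE per thousand annual attendances and testing its association with a performance measure such as the share of attendances completed within four hours. The records below are invented placeholders, not SITREP or HES figures; the numbers are mine, purely to show the shape of the analysis.

```python
# A sketch of a cross-department comparison: staffing ratio vs. performance.
# The department records are invented placeholders, not real SITREP/HES data.
from statistics import mean

# Hypothetical records: (annual attendances, nurse WTE, % treated within 4 hours)
departments = [
    (95_000, 110, 93.1),
    (80_000, 120, 91.4),
    (120_000, 150, 88.7),
    (60_000, 95, 94.5),
    (105_000, 125, 90.2),
]

# Staffing expressed as WTE per 1,000 annual attendances, so departments of
# different sizes can be compared on the same footing.
ratios = [wte / (att / 1000) for att, wte, _ in departments]
perf = [p for _, _, p in departments]


def pearson(xs, ys):
    """Plain Pearson correlation: a crude first look at association."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


print("staffing ratio vs 4-hour performance: r =", round(pearson(ratios, perf), 2))
```

A real version would use all 150-odd departments, several periods of data and some adjustment for case mix, but even this crude association test is built on observations of the system we actually have.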
NICE did none of this.
What is worse, the exercise has been done before and nobody at NICE, it seems, noticed.
When the Audit Commission existed and still did some work on hospital performance they had a programme called the Acute Hospital Portfolio Review. When the programme reviewed A&E it looked at staffing and performance on a range of clinical metrics including speed but also including quality of care. In other words, they did exactly what NICE didn't. The last of their reports that I know of is preserved here (pdf download).
The reports reached some startling and unexpected conclusions about A&E staffing which were credible because they were based on extensive real evidence from actual English A&E departments, not on models or academic speculation. Here are two conclusions (with my emphasis):
Common sense would suggest that a large part of the improvements in times spent in A&E departments since 2000 has been due to the increases in staff. However, when comparisons are made at individual department level, there is no association between relative increases in staff and improvement in times spent in A&E.…
for comparability, staffing levels need to be expressed as a ratio between actual staff numbers and the numbers of annual attendances (a reasonable measure of the size of a department). When expressed in this way, there is no relationship between times spent in A&E and staffing levels. Tightly staffed departments perform as well as generously staffed departments. This is consistent with the findings in the 2000 review.
Staffing in A&E has improved significantly since the last of these reports was written.
I suspect that the detailed data behind these conclusions has been lost with the abolition of the Audit Commission and its successor. But I know that the evidence was comprehensive and solid over several successive periods of data collection.
If you want to produce guidance about staffing that disagrees with their surprising conclusions then you need to generate some better evidence. Nothing in the NICE recommendations does that.
We also have other recent analysis that shows the biggest problem in A&E performance has nothing to do with A&E staffing but is about coordinating the A&E demand for beds with the flow through the beds in the rest of the hospital. No amount of extra staffing in A&E will help that. So not only is the evidence behind the NICE staffing recommendations as weak as wet toilet roll, it completely fails to address the biggest actual problem in our A&Es.
The controversy over the non-publication of the work has given it a credibility it doesn't deserve. The right response would have been to publish it and ignore it as it has nothing credible to say.