
Tuesday, 13 November 2018

You can't make the NHS better by optimising its components.

In a system with many interdependent parts, trying to optimise the parts separately doesn't optimise the whole. Local optimisation doesn't lead to system optimisation. This is a lesson NHS management needs to learn in many areas, from how emergency care is managed to how the costs of diabetes are minimised.

There is an old (possibly apocryphal) story about the perils of central planning. Stalin issues a demand that factories improve their productivity by producing more output for the same number of hours worked. Some clever factory manager realises that switching between producing left-footed and right-footed shoes wastes time, so he mandates that the factory only produces left-footed shoes. Output of shoes rises significantly and he makes his productivity target. But, of course, this is terrible for the people, as one left shoe is useless by itself (unless you are a war veteran who lost his right leg, and there are few of those, not least because war injuries don't discriminate about which leg is blown off).

If your local metrics are wrong, factory productivity is not a good indicator of system productivity.

But this sort of naive focus on local metrics is, even now, a big problem in the NHS (which also suffers many of the other problems inherent to centrally planned systems).

The NHS is short of managers and is particularly short of skilled managers. The system sometimes seems to hate them not least because many politicians seem to regard them as parasites who suck resources away from the heroic front-line staff (even Sumproduct Phil's newfound largesse came with the warning that the extra cash should go to the frontline not the "bureaucrats"). But managers are necessary in any system not least because a poorly organised and coordinated system will function badly however many "front-line" staff it has.

One particular failing of management-lite systems is that there is nobody to do the system-level thinking that makes that coordination work. So, many management decisions are divided up into smaller decisions that can be made locally with no attempt to consider the system-level consequences. This is one factor leading to poor system productivity. The drive to improve system productivity is reduced to a set of local initiatives to drive up local productivity and, like the shoe factory, this doesn't achieve its intended goal.

Optimising A&E doesn't fix the A&E performance problem
Take, for example, the drive to improve A&E performance. It is all too common for this to be seen as a problem for the A&E department alone. So local managers devise local initiatives to improve staffing, reorganise flow, divert patients, develop clever ways of dodging the 4hr metric and so on. But these don't work. So leaders put more pressure on staff to work harder and do better. But the staff are demoralised from all the previous initiatives and become burnt out, increasing turnover and reducing continuity. The initiatives repeatedly fail; morale and engagement fall. More pressure is exerted and the downward spiral continues.

I've ranted about why this happens plenty of times. But the key point here is that poor A&E performance isn't (mostly) an A&E problem. It is a system problem. Much of the problem is a failure of flow through beds (which are not controlled by the A&E department but by the specialties running wards). In turn, some of their problem is caused because the hospital is not in control of the systems in the community that can get patients the community care they need.

This problem needs joined-up thinking to create any hope of a solution. Trying to fix it by putting more and more pressure on the A&E department is futile and, if anything, makes the overall problem worse.

Local optimisation doesn't lead to system optimisation.

Minimising the cost of blood-glucose testing doesn't minimise the cost of diabetes
In another example, I recently heard of a CCG attempting to use RightCare metrics for the cost of diabetes blood-sugar tests to drive lower spending. Now there isn't anything wrong with trying to use the cheapest effective technology, as this frees up money to use elsewhere for other treatments. All other things being equal, CCGs should aim to use the cheapest technology that does a good job. But all other things are not equal, and some of those other things matter a lot.

The problem here is that diabetes is a complicated area and what you do with testing affects the need for treatment elsewhere. The background is that diabetics with good blood-sugar control have far fewer complications in the future. But it is also important to note that most diabetics do not test their blood glucose often enough to achieve good control, partially because pricking your fingers 10 times a day is inconvenient and painful. We just don't prescribe enough blood-glucose test strips for all insulin-using diabetics to test as often as they should. There is a reasonable case for saying CCGs should encourage more testing (or new technology like the Freestyle Libre continuous glucose monitor which, in effect, allows 24hr continuous testing for the same price as the recommended levels of finger-prick tests).

But the easiest way to control the cost of glucose testing is to limit the number of test strips issued to the CCG's population. That is picking the wrong metric for the wrong local optimisation. Sure, if you limit the number of test strips issued you will look good on the spending metric compared to other CCGs. But your diabetics will do fewer tests, will have worse glucose control and will end up with more complications.

And this is really, really bad for the system as a whole. To see why, look at the overall costs of diabetes. A recent estimate puts the cost of diabetes to the NHS at around £10bn/year. Drugs alone are only about 10% of this, costing a smidgen under £1bn in 2017 in England. Blood-glucose monitoring costs <£200m out of that total. Most of the rest (certainly 75% of it) is spent dealing with the complications of diabetes (in the long term, many amputations and many cases of blindness, for example, are caused by diabetes; even in the short term, poor glucose control leads to many hospital admissions for high or low blood sugar, which can be life-threatening if not treated promptly).

So trying to limit the testing spend (the <£200m) might be good if considered in isolation. But it doesn't look so good if it involves any risk at all of increasing the multiple billions spent on complications. Which it does.
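A back-of-envelope sketch makes the asymmetry concrete. The cost figures are the rough ones quoted above; the 25% testing saving and the 1% rise in complication costs are purely illustrative assumptions, not evidence-based estimates:

```python
# Back-of-envelope model of the diabetes cost trade-off, using the
# rough figures quoted in the post. The assumed 25% testing saving and
# 1% complications increase are illustrative, not evidence-based.

total_cost = 10_000e6                    # ~£10bn/year NHS cost of diabetes
testing_cost = 200e6                     # <£200m/year on glucose monitoring
complications_cost = 0.75 * total_cost   # ~75% goes on complications

# Suppose aggressive strip rationing cut the testing spend by a quarter...
testing_saving = 0.25 * testing_cost

# ...but worse glucose control pushed complication costs up by just 1%.
complications_increase = 0.01 * complications_cost

print(f"Testing spend saved:     £{testing_saving / 1e6:.0f}m")
print(f"Complications increase:  £{complications_increase / 1e6:.0f}m")
print(f"Net change for the NHS:  £{(complications_increase - testing_saving) / 1e6:+.0f}m")
# A 25% cut in testing saves £50m; a mere 1% rise in complication costs
# adds £75m. The local "saving" is a net loss for the system.
```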

So far I don't know of many CCGs trying to limit the spend this way. But most of them are guilty of making a similar sort of choice when it comes to new technology for testing blood glucose. Abbott's Freestyle Libre is a wearable monitor that tests blood glucose every few minutes to give a complete 24hr profile, providing the sort of insight that enables diabetics to achieve much better control. Libre would cost about £900/yr if CCGs made it widely available. This is about the same as the cost of conventional testing for diabetics who test 10 times/day (which is what they need to do to get good control, as NICE advises). But most diabetics don't test that much, so moving to Libre would cost more, and CCGs are resisting the switch (and inventing incoherent clinical reasons to justify that stance). None of the CCG documents justifying this stance even mention the other costs of diabetes or how they could be reduced by more blood-glucose testing leading to better control.
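The rough price parity is easy to check. A minimal sketch, assuming a strip price of about 25p and a sensor costing about £35 and lasting two weeks (my assumptions for illustration, not quoted NHS prices):

```python
# Annual cost comparison: finger-prick testing at the NICE-recommended
# frequency vs the Freestyle Libre. Unit prices are assumptions.

STRIP_PRICE = 0.25        # assumed £ per test strip
TESTS_PER_DAY = 10        # frequency needed for good control
SENSOR_PRICE = 35.0       # assumed £ per Libre sensor
SENSOR_LIFE_DAYS = 14     # each sensor lasts about two weeks

finger_prick_annual = STRIP_PRICE * TESTS_PER_DAY * 365
libre_annual = SENSOR_PRICE * 365 / SENSOR_LIFE_DAYS

print(f"Finger-prick at 10/day: £{finger_prick_annual:,.0f}/yr")  # ~£900
print(f"Freestyle Libre:        £{libre_annual:,.0f}/yr")         # ~£900
# At the recommended testing frequency the two are roughly at price
# parity. Libre only looks "more expensive" when compared against
# patients who currently test far less often than they should.
```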

Their local optimisation of the cost of glucose testing is a catastrophe for the total cost of treating diabetes across the whole NHS. Even a modest improvement in average blood-glucose control would yield a huge gain in the cost of complications. This will never happen if all CCGs consider is the local cost of testing.

In a complex system like the NHS local optimisation is dumb
The point uniting these two very different examples is that they both involve local optimisation and a failure to think about how one part of the NHS is connected to the other parts. Trying to fix the whole NHS by telling its parts to maximise their productivity or minimise their costs doesn't work.

Every part of the NHS needs to understand how it fits into the system and how it interacts with the other parts. And everyone's goal should be to make the system work better, not just their little, local part of it. Productivity in the NHS won't improve if we don't think at the level of the whole system.

Friday, 3 August 2018

Asking the wrong question about GP behaviour is even worse than getting the wrong answer about it

A recent survey based on an NHS England idea suggested that 10-20% of GP appointments were avoidable. But the answer is useless because the wrong question was asked of the wrong people at the wrong point in the process. Worse, the very way the survey was framed was built on false assumptions about how GPs could work, leaving the most important question unanswered: what would happen if GPs organised their work differently? It is astounding that such a bad survey was commissioned and has any influence over NHS policy.


That GPs are overloaded with demand and overworked appears to be an almost unquestioned belief in the current NHS. So it should be important to understand what can be done about this. We need data. We need good analysis. We need better ideas about what to do.


So when I saw reports concluding that 20% of GP appointments were avoidable, I thought they might be the result of a careful analysis of what was going on.


But I was wary. Similar surveys of A&E attendances conclude that too many people go to A&E instead of other services. This observation is, however, useless as it fails to consider that these people are not the cause of poor A&E performance and that we have no idea how to make them go anywhere else. It is therefore useless for policy, unless wish fulfillment is now a major element of NHS planning.


Sadly, despite the amount of effort put into the GP survey by The Primary Care Foundation, the same is true of its results. In fact they might be worse.


As far as I can tell, the key survey asked GPs at the end of a sample of appointments whether each appointment could have been avoided. Nationally they thought that perhaps 20% could have been handled by someone else (by which they mean some mix of nurses, pharmacists or other staff). So far so good. The results might even be true.


But they have asked the wrong question of the wrong people at the wrong point in the process.


What if, instead of waiting for patients to get through the typically annoying process to get a 10 minute slot with their GP, they asked, instead, how many of the people granted an appointment actually needed an appointment to sort out their problem? By assuming that every patient interaction has to involve a 10 min appointment we have already made the strong assumption that 10 minute appointments are the only way GPs can respond to demand. And that demand can be mitigated–but only slightly–by using a different mix of staff in the practice combined with better signposting.


We have good evidence from a number of practices that changing the way GPs respond to demand can have a much bigger impact than this. Scores of practices have switched to different processes where the GP interacts with patients before booking appointments and only offers face to face appointments to those where the GP and patient agree it is required. These GPs typically find that 60-70% of demand can be handled without an appointment. In the practices that get this right the GP workload goes down substantially and patient satisfaction soars as they typically get fast on-demand responses to their problems and same-day appointments when they need them (rather than having to wait a week or two for the next available slot). See this tweet from GP Dave Triska, for example (he tweets his experience regularly and it is well worth checking out his feed).


The problem, these GPs have realised, is that the assumption that the only tool they have is a 10 minute appointment is false. There are plenty of other ways to respond to many patient requests and most of them are far more efficient than 10 minutes spent face to face. Sorting this out before spending 10 minutes in front of the patient saves a lot of time for both parties.
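To get a feel for the size of the prize, here is some illustrative workload arithmetic using the 60-70% figure above. The 3 minutes per triage contact is my assumption, not a measured figure:

```python
# Illustrative workload arithmetic for a triage-first model using the
# 60-70% figure quoted above. The 3-minute triage time is an assumption.

DEMAND = 100               # patient contacts per day
APPT_MINUTES = 10          # minutes per face-to-face appointment
TRIAGE_MINUTES = 3         # assumed minutes per phone/online triage
RESOLVED_REMOTELY = 0.65   # share resolved without an appointment

appointment_only = DEMAND * APPT_MINUTES
triage_first = DEMAND * TRIAGE_MINUTES \
    + DEMAND * (1 - RESOLVED_REMOTELY) * APPT_MINUTES

print(f"Appointment-only model: {appointment_only:.0f} GP-minutes/day")  # 1000
print(f"Triage-first model:     {triage_first:.0f} GP-minutes/day")      # 650
# Even after paying the triage overhead on every contact, total GP time
# falls by about a third, and most patients get a same-day response.
```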


What the Primary Care Foundation should have done is survey the incoming demand to GP practices and ask whether a face-to-face appointment was the best way to respond to it. By failing to do this they embedded the false assumption that 10 minute slots are the only tool in a GP's toolshed. This reinforces the false belief that there is no alternative and that the best we can do is make minor adjustments inside the practice or, somehow, deflect the demand somewhere else.


The net result of this bad survey will be to blind GPs and policymakers to far better, more radical alternatives. That's really not the best way to get data telling us how to improve GP practice.

Tuesday, 3 July 2018

Knee-jerk ideology makes any sensible debate on the NHS virtually impossible

The NHS needs to improve. Most people agree with that. But when it comes to what specific policies or actions will deliver improvement there is far less agreement. So sensible debate about what to do would be useful. Sadly we are unlikely to ever have that debate when the response to any suggestion consists of a storm of ideological name calling.


A recent opinion piece on the BBC's Newsnight programme about the NHS has led to a storm of protest and abuse. The piece consisted of a short video by Kate Andrews of the IEA (a free market think tank whose funding is not very transparent). The response mostly consisted of arguments like this:
  1. The BBC should not provide a vehicle for right wing propaganda
  2. Kate Andrews doesn't understand the NHS because she is an American
  3. The IEA is an evil propagandist for an insurance based system and NHS privatisation
  4. The USA's health system is evil
  5. The BBC is biased
And more on the same lines.


Before I watched the video I assumed that she must have argued that the NHS needed to be financed by user charges and broken up into an American-style mess run by the private sector for profit. Then I watched it. And my reaction was "what the holy fuck are those commentators talking about?"


The key arguments in the video are these:
  1. The NHS is in a state of perpetual crisis
  2. There is little appetite for radical reform
  3. Campaigners often pretend that there are only two alternatives: the NHS or the US "system"
  4. The NHS isn't, on reasonable metrics, "the envy of the world": outcomes are better in many other health systems
  5. The USA's system is crap but it is also an extreme outlier and a meaningless comparison
  6. Many other countries (Australia, Singapore, The Netherlands, Germany, Sweden…) have universal healthcare with better outcomes than the UK
  7. Competition between a variety of providers (both profit and non-profit) is a common factor in other universal care systems
  8. Market reform in the NHS would be good
And that is it. No calls for abolishing "free at the point of use". No calls for privatisation. No praise at all for the US system.


The only contentious call is for more market reform in the NHS. Even the most left-wing pro-NHS campaigners agree with point 1. Point 2 is arguable but not controversial (unless you think that abolishing the Lansley act is radical, which it isn't: returning the NHS to its previous legal and structural model would be a big change, but a conservative one). Point 3 is simply a summary of what the majority of commentators and campaigners seem to argue. Point 4 is solidly based on facts. Point 5 is also uncontroversial (though the fact that someone from the IEA has just said it ought to be interesting, and pro-NHS campaigners might like to quote it a few more times in their arguments). Point 6 is also true but, annoyingly, rarely mentioned in the debate about what we should do to improve the NHS. So is point 7, which probably explains why left-wing campaigners rarely mention 6: it undermines the simplistic idea that provider competition is the root cause of everything wrong with the NHS. Point 8 is admittedly contentious and worth arguing about, though not for the reasons campaigners would normally use.


On that last point it is worth a quick diversion to see what good arguments against it would look like. The best argument isn't that competition doesn't work: it does, even in the NHS as respected and non-ideological health economists like Carol Propper have shown. But those studies showed that the benefits to quality were, likely, small (though the NHS's experiments with competition for elective procedures were not very radical). More importantly, more competition would do little to address the immediate short-term problems in the NHS even if they improved quality in the long term. We have more important things we need to tackle right now like gross underfunding, especially of capital and improvement projects.


So I can take Kate Andrews' argument and deal with it like an adult, specifically looking at what she argues for and rebutting it with logic. A disturbing amount of the comment, however, looks less like a fight in a primary school playground and more like a shit-throwing contest between two opposing tribes of agitated chimpanzees (seriously, read the twitter thread on the Newsnight post).


It is hard to judge whether any of the shit-throwing commentators even watched the video. I thought more might have latched onto her criticism of the US model ("even the IEA think healthcare in the USA is rubbish!"). What we actually got as a typical response was a series of ad-hominem attacks on Andrews and the IEA coupled with damning criticism of BBC bias for daring to allow such dangerous views to be broadcast (despite the piece being clearly labelled as opinion, intended to provoke debate and comment). Yes, the opaque funding of the IEA should make us suspicious of what they say and forensic in examining their facts and arguments, but it doesn't automatically render what they say mendacious nonsense.


Much of the comment validates a point I have made before: it is impossible to discuss the NHS in ways that might help improve it because the "debate" is conducted in clichéd shibboleths where all that matters is proving which side you are on. Identifying real problems and fixing them is utterly irrelevant.


If we really want to fix the NHS (and spend the government's newfound largesse well) we need to dispassionately analyse what the real problems are and apply the new money to solving them. We need to look objectively at the facts and not ignore the ones that don't match whichever ideological agenda we have. The intemperate shit-throwing of the pro-NHS commentariat does not exactly encourage that debate. And the likelihood of finding good ways to make the NHS better is substantially lower as a result.

Tuesday, 6 February 2018

We should design A&E statistics to help improve the system not to make it look better than it really is

The desire to make A&E performance look good is in conflict with the need to improve it. The idea that we should report system performance (as David Nicholson argues) is a mistake. We should understand how the system actually works and measure the numbers that most help focus improvement effort.

The problem with the statistics on A&E performance goes far deeper than worries about fiddling the data to get better reported numbers. The real problem is the conflict between the political need to report the best performance and the management need to know what is happening so they can improve performance. The two are in serious conflict in a way the system has never fully acknowledged.

The political desire is to report something, preferably something good. The management need is to report something useful for directing attention to actions that will lead to improvement.

The conflict arises because essentially all the performance problems are in major A&E departments so reporting a combined number (including minor units) distracts attention from the units where performance is worst. For management purposes we might as well ignore all the stand-alone WICs and MIUs as they almost never see waits longer than 4hr. We should report their performance, but we should report it separately to avoid confusion about where the problem is.

The political argument is that emergency care is a system and we should report the overall performance of the system. But this is based on a fallacy. Originally, MIUs and WICs were intended to relieve "pressure" on major A&Es who would, as a result, show better performance. This failed for two reasons: one was that there was no evidence that they diverted a notable number of patients away from major A&Es; the other insight (only available after better data was collected from everyone) is that, even if they had succeeded in diverting patients from major departments, this would not have improved performance in the major A&Es (the divert-able patients are not the ones with long waits even in major A&Es).

What this means is that there are really two systems out there not one: major A&Es (type 1 departments) and units handling minor conditions (type 3s). They don't interact enough to be considered a single system. So they shouldn't be measured as a single system because that just creates confusion about where the problems are.

(There is a slight grey area where a type 3 unit is closely associated with a major A&E. In many of these all the minors are diverted at the front door to the type 3 unit. But this is exactly how a well functioning type 1 A&E should behave internally anyway (minors should be streamed to a simple and fast process) so it seems a little odd to separate the two for reporting purposes. This only applies when the units are co-located.)

The big conflict comes when hospitals are allowed to count remote WICs and MIUs in their headline performance. The only benefit of doing this is political: it makes the overall number look better. It causes immense confusion and inconsistency in headline performance, not least because some hospitals can do it and others can't, making any performance comparisons unfair. Worse, some hospitals have wasted large amounts of management time trying to grab control of minor units so their headline numbers improve, a change which makes precisely no difference to any patient's experience but wastes a lot of management resource in pursuit of window-dressing, not actual improvement.
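The arithmetic of that window-dressing is worth making explicit. A sketch with made-up volumes and performance levels, not real trust data:

```python
# How blending near-perfect type 3 (WIC/MIU) attendances into the
# headline flatters reported 4hr performance. All figures are made up
# for illustration.

type1_attendances, type1_perf = 80_000, 0.82    # struggling major A&E
type3_attendances, type3_perf = 40_000, 0.995   # minor units, near-perfect

seen_within_4hr = (type1_attendances * type1_perf
                   + type3_attendances * type3_perf)
headline = seen_within_4hr / (type1_attendances + type3_attendances)

print(f"Type 1 performance:   {type1_perf:.1%}")     # 82.0%
print(f"Headline performance: {headline:.1%}")       # 87.8%
# Claiming a remote MIU lifts the headline by nearly six points without
# a single patient being seen any faster.
```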

It is unclear what the recent changes mean to all this. In the current reports it is normally possible to extract pure type 1 performance if you want to see it (though it isn't the headline number reported). If the new rules allow less transparent reporting of numbers from minor units they are clearly obfuscatory and bad for the system.

David Nicholson (the former head honcho of NHSE) doesn't agree. He claimed in this tweet that the 95% target was a system target, was correctly set at 95%, and would have been set much lower if it was intended to cover type 1 performance only. But he needs a history lesson. The original target was set at 98% and was only reduced to 95% when the incoming coalition government realised that abolishing the target completely would be a catastrophe. And there was no convincing clinical argument for the change (the original 98% was to allow for a small number of clinical exceptions, concussions for example, where patients needed to be observed for more than 4hr before discharge). Yes, some medics thought the target was too strict, but they had no evidence-based clinical reasons for this: they believed, naively, that a more relaxed target would be "easier" to meet. They were wrong: before the target was changed the system was exceeding it; as soon as it was changed, performance declined rapidly and frequently failed to meet even the new, supposedly easier, target.

It is also worth noting how the system performed when the original target was set. I extracted the chart below from my archives (I provided the internal weekly reports of national performance to the A&E team for several years after the target was first set: the chart comes from a public presentation I did; sadly I don't have access to the detailed data any more).

[Chart: weekly national A&E 4hr performance in the years after the 98% target was introduced, showing all types and type 1 separately]

The system as a whole eventually met the 98% target for several years. And the type 1 performance often met the 98% until 2010. Moreover, the dilution effect of including the non-type 1 units was small when the type 1 performance was good. It is also worth noting that the decision to publish the system performance (including non-major A&Es) was challenged by analysts who knew that the focus needed to be on major A&E departments. The politically-minded civil servants disagreed and decided to publish the system performance but compromised by including the type 1 performance separately in internal reports to support more focussed improvement interventions.

While consistency is a good thing to aim for (this is the excuse for the changing rules) we seem to be aiming for a fake consistency. WICs and MIUs are not evenly distributed, so consistent aggregation of numbers creates an inconsistent and unfair view of actual major A&E performance in different hospitals. By far the easiest way to achieve real consistency would be to focus on major A&Es only and ignore the rest (or report their performance separately). That would be consistent and fair. But performance would look much worse.

The same is true for other, related statistics on A&E performance. England also publishes a 12hr wait statistic, but this is incredibly misleading because it isn't an end-to-end metric. The clock starts when the hospital makes a decision to admit the patient, a decision many don't make until they know a bed is free, meaning that the patient may already have waited 12hr before the clock even starts. Wales and Scotland start the clock when the patient arrives, which isn't gameable. This has confused the Prime Minister, who incorrectly compared Welsh and English numbers during PMQs, drawing criticism from the chair of the UK Statistics Authority, not least because there are about 100 times more end-to-end 12hr waits in England than the published number quoted by the PM suggested (as I explained here).
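The gap between the two clock-start rules is easy to demonstrate. A minimal sketch, assuming HES-style records with arrival, decision-to-admit and departure timestamps (the column names and example patients are hypothetical, not real HES field names or data):

```python
# Why the English "trolley wait" count diverges from an end-to-end
# count. Column names and the two example patients are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "arrival":           pd.to_datetime(["2018-01-02 08:00", "2018-01-02 09:00"]),
    "decision_to_admit": pd.to_datetime(["2018-01-02 21:30", "2018-01-02 12:00"]),
    "departure":         pd.to_datetime(["2018-01-03 02:00", "2018-01-02 23:30"]),
})

twelve_hours = pd.Timedelta(hours=12)

# England's published metric: clock starts at the decision to admit.
trolley_waits = (df["departure"] - df["decision_to_admit"] > twelve_hours).sum()

# End-to-end metric (the Welsh and Scottish basis): clock starts on arrival.
true_waits = (df["departure"] - df["arrival"] > twelve_hours).sum()

print(f"Published 12hr 'trolley waits': {trolley_waits}")  # 0
print(f"True end-to-end 12hr waits:     {true_waits}")     # 2
# Patient 1 waited 18hr in total but only 4.5hr after the decision;
# patient 2 waited 14.5hr in total but 11.5hr after the decision.
# Neither appears in the published count.
```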

Real 12hr waits are actually measured in England, just not published. NHS Digital now release them (but infrequently) and few NHS managers seem to know they exist or use them, despite the important additional information they provide to supplement the reported 4hr performance. They are very embarrassing as they highlight just how badly A&E performance has deteriorated in the last 8 years. So embarrassing that I was once fired for impetuously talking about them in public (I shouldn't have done it, but I was naively trying to improve the quality of the debate about A&E performance using actual data).

To summarise: the real problem isn't fiddling the numbers; it is choosing the wrong numbers to look at in the first place. The need to improve performance is in conflict with the political need to report the best performance possible. We should report the numbers most useful for driving improvement (type 1 4hr and 12hr performance) not the ones that confuse us about where the problems are.



PS When private sector firms do this with their performance numbers it usually ends in disaster. Enron and Carillion, for example, used clever ruses to make their public headline financials look better than their real underlying performance. In the end their managements fooled themselves. Eventually, the real situation became apparent, but far too late for anyone to act to avert disaster.


Thursday, 25 January 2018

The week in bullshit, continued…

Last week I fired off a rant about how politicians misuse statistics to win arguments with little regard for the relevance or context of the numbers thereby doing extreme violence to honesty and truth telling. This week I find yet more examples from the NHS.

Here is why the specific instances constitute bullshit.

The Prime Minister said the following in Prime Minister's Questions on January 24 in response to a Jeremy Corbyn complaint about poor performance in the English NHS over winter:

If he wants to talk about figures and about targets being missed, yes, the latest figures show that, in England, 497 people were waiting more than 12 hours, but the latest figures also show that, under the Labour Government in Wales, 3,741 people were waiting more than 12 hours.

Corbyn didn't spot the problem (I presume neither he nor his advisors are any more knowledgeable about these statistics than the PM or her health minister).

But it is quite a simple issue: the Welsh NHS counts something different when it measures 12hr A&E waits than the English NHS does. In Wales, the clock starts when the patient arrives; in England the clock starts when a decision is made to admit the patient to a bed. This decision is highly gameable and is highly gamed. Even without any explicit fiddling of the numbers (there are rumours that some management teams discourage their staff from recording the time promptly to reduce their reported numbers) the decision is often postponed until the hospital knows there is a bed available. This may happen after the patient has already waited 12 hours in A&E.

The English "trolley wait" metric is a terrible, useless and misleading metric. It actively distracts from a good understanding of the problem of long A&E waits. Yet here we have a politician using it to win an argument with the opposition instead of trying to understand what is going on in A&E.

Here is some help to put it in context. The comparable number is accessible from data collected from hospitals (it is trivial to calculate true 12hr waits from A&E HES data; it just isn't routinely done and won't yet be available for this winter as national HES takes a few months to compile). In January 2017 there were 46,413 true 12hr waits in English A&Es (these figures were released by NHS Digital after an FOI request). That is the comparable number May should have quoted (if she had an up-to-date version of it, and we have no reason at all to assume it would be any better in January 2018). If anyone in the system cared to have reliable and useful numbers to tell them how A&E was performing, they could easily collect these numbers on the same basis as Wales, giving them a much better and ungameable insight into what is really happening. Guess why they don't do that.

The disease, unfortunately, runs deep. Here are some extracts from Pauline Philip's report on winter pressures to the NHS Improvement board on January 24:

...management information for January suggests an improvement and the system is performing better than at the same point last year… 
[compare to this a few paragraphs later]
...A&E performance for December was 85.1%. This is 3.8ppt below the previous month (88.9%) and 1.0ppt lower than the same time last year...

...Performance is impacted by higher bed occupancy than last year and increases in attendances and emergency admissions...

[a few paragraphs later]
...Type 1 attendance growth compared to the same month last year is 1.0%.

[a 1% increase in attendance is actually below the long term trend in increases, though, to be fair, admissions were up by a much larger amount and they matter more. OTOH the excuse that "performance is impacted by...increases in attendances" is not the most accurate way to report the situation]

...the trend for much lower long trolley waits continues; 12 hour waits are 10% lower compared to the same time last year.

The problem here is that the text is frequently misleading when compared to the numbers quoted (always a danger when people are allowed to write paragraphs of bullshit instead of showing the clearest analysis of the key data points in context). And Philip (who should know better, having run a hospital with an outstanding A&E department) goes on to use the same trolley wait statistic that the PM quoted to claim that things are improving. It is such an unreliable statistic it tells us no such thing.

If you are going to manage A&E better you need to use numbers that are reliable indicators of what is really going on, not numbers that are both a poor metric and utterly gameable. The trolley wait metric should have been burned years ago.

Maybe this point is simply not understood and the politicians and NHS leaders just don't get that this statistic is bullshit. Maybe that interpretation lowers their culpability for promulgating bullshit, but it is hardly comforting that the people in charge of improvement don't seem to possess the basic knowledge that any competent analyst of A&E statistics has had for years.

The reason why England doesn't routinely release reliable numbers about long waits in A&E is that they are very embarrassing. If they were widely used by NHS Improvement, as they should be, to understand what was really happening so their efforts could be focussed on generating real improvements, there would be a lot of bad headlines (which might be worth it if it led to actual improvement).


Sadly, in politics and organisations dominated by political management, improvement isn't the point: good headlines are all that matters. The impact of political bullshit is pervasive and corrosive.


PS. I'm not the only one who noticed. Faye Kirkland posted this on twitter just after I completed the original version of the blog. It is a letter to the Chair of the UK Statistics Authority pointing out just how misleading her comparison was. It will be interesting to see how he reacts.

Tuesday, 16 January 2018

Political bullshit with numbers is making it ever harder to make good decisions


If governments want to make good decisions they have to have reliable data about what is happening. But they increasingly don't use numbers that way. Instead of using data for insight they use it for bullshit and undermine the evidence they need to make a difference to anything.

So the NHS is having a winter crisis. This year, instead of the service responding in a panic when the unpredictable event of winter occurred, the panic response was, apparently, planned. Apparently, this is good; our lords and masters said so.

But there is a little vignette that occurred in Parliament that illustrates a great deal about why we have such problems, and even more about the reasons why we currently look like a kakistocracy. It relates to the statistics about bed occupancy in hospitals and illustrates something profoundly disturbing about how politicians handle statistics.

The background to the story is that the government has now mandated daily statistics about "winter pressures" in the NHS. That might not be a bad thing in itself if the point were to make management decisions in response to the numbers (though this supposes an ability to know what response to make and to interpret the numbers correctly: neither is obviously true).

One of those statistics is bed occupancy. This isn't a very useful statistic (as I've argued before) but collecting it daily is much better than weekly or monthly which is what is done for the rest of the year.

The government (and many others) have set a "target" level of occupancy for beds to ensure there are enough free beds each day to cope with demand. That target says no more than 85% of beds should be occupied.

So far so good. But the annoying doctors and opposition insist we are in the middle of a crisis in bed availability and keep complaining. In response to one of those complaints, and in explaining the impact of his winter plan, Jeremy Hunt said this in the House of Commons:
The shadow Health Secretary told The Independent: “It is completely unacceptable that the 85% bed occupancy target…has been missed”. What was bed occupancy on Christmas eve? It was 84.2%, so this had a real impact.

To put his claim in context, here is the chart of daily occupancy to early January (from the latest data I could get from NHS England). Shading identifies complete weeks:

[Chart: daily hospital bed occupancy through early January, from NHS England winter sitrep data; shading marks complete weeks]

Which number did Jeremy Hunt quote? The least representative number on the chart and the only day in the whole of winter when the target was met. He also ignored the longer-term context: the days near Christmas have the lowest occupancy of the whole year and, historically, have often been in the 60% range.
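To see just how unrepresentative a single hand-picked day is, consider a sketch with synthetic occupancy figures (invented for illustration, not the real NHS England series):

```python
# Why quoting the single best day misleads: synthetic daily occupancy
# for a fortnight spanning Christmas Eve (invented figures).
import statistics

occupancy = [0.948, 0.951, 0.945, 0.953, 0.939, 0.902, 0.842,  # dip on 24 Dec
             0.871, 0.913, 0.934, 0.947, 0.952, 0.956, 0.951]

print(f"Best single day:  {min(occupancy):.1%}")                 # 84.2%
print(f"Median day:       {statistics.median(occupancy):.1%}")   # 94.6%
print(f"Days over the 85% target: {sum(o > 0.85 for o in occupancy)}/{len(occupancy)}")
# Only one day in the fortnight met the target. Quoting that day as
# evidence the plan "had a real impact" is pure cherry-picking.
```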

Now maybe he was just having a bad day and didn't mean to quote something so irrelevant to the current problems with beds. But another minister said this two days earlier when challenged with a similar complaint in the Lords:
The noble Baroness talked about bed occupancy. Of course, we know that high levels of bed occupancy are a concern. Bed occupancy was below the target of 85% going into this period—on Christmas Eve it was 84.2%
I think we can conclude that this number has been shared around the government as the one to quote to deflect any complaints about the state of the NHS at winter.

Sure, politicians have to win debates and this will, inevitably, involve some spin. But the way this number was brought up goes beyond reasonable spin and becomes what Frankfurt would describe as bullshit:
[the bullshitter] does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.
What this case illustrates is a deeply troubling view about how politicians treat statistics. They do not look to them as a source of useful information they can use to make decisions. They trawl them for any number that supports the argument they are trying to make, regardless of meaning or context. In doing this they utterly devalue their use in decision making or management.

There is an alternative explanation that is slightly less pejorative: perhaps they are so statistically illiterate they don't understand the numbers or the context. Unfortunately it isn't obvious that this explanation bodes any better for how well the country is run.

I tell this story as an illustration of a very widespread and pervasive phenomenon in modern politics. It isn't just the government; they are all at it, opposition and minor parties included. There seems to be no willingness to do the hard work of analysing problems before deciding key policies or actions. The process now seems to be to identify some actions or policies likely to play well in newspaper headlines or with supporters. Only then, after the key decisions are made, does anyone look at the evidence, and then only to wrench out some number, no matter how out of context or irrelevant, that supports their view. Even when the number is rebutted by the highest statistical authority in the land, they will often continue to quote it (as Boris has just done with the legendarily bad claim that £350m/week goes to the EU). Truth and context are irrelevant: all that matters is winning the argument.

This is no way to run a government. We need people in government and opposition who are competent, honest and who are prepared to do the hard work of analysis before making arguments or deciding policies. If we don't get them, and get them soon, the bullshit will overwhelm our ability to make any good decisions about anything in public policy.

Saturday, 13 January 2018

The NHS isn't over-managed

The NHS needs more money. But the belief that it needs less management or administration is nonsense. It won't spend any new money well unless it has the management capacity to spend it well. That means it needs more management, not less.

The FT has a well-deserved reputation for balanced and factual commentary on the big issues. So I was surprised to see this cliche repeated in an editorial on January 5: "There are too many administrators and not enough front-line medical staff."

Other commentators constantly repeat similar untrue cliches. On Radio 5's "good week bad week" on Sunday I heard someone claim "the NHS has more managers than nurses".

It isn't true. The reality is that the NHS is one of the most undermanaged organisations on the planet. Here are the numbers from the NHS staffing system.


[Chart: NHS staff numbers by major staff group, from NHS workforce statistics]

There are ten times more nurses than managers and three times more doctors than managers.

And the number of managers has been falling. Numbers were cut by about 30% by the Lansley reforms, because Lansley believed the cliche ("more resources to the front line", which I've argued before is one of the stupidest in the debate on NHS policy). It hasn't obviously worked.

Just for reference, here are the relative numbers of different types of staff compared to their levels before the coalition government took over in 2010:

[Chart: NHS staff numbers by group, indexed to their 2010 levels]

What is notable is the steady and then sharp decline in manager numbers (with a subsequent slow increase as the system realised it had drastically overdone the cuts). Also notable is that consultant numbers are rising a lot while nursing numbers are steady (which suggests that the recent complaint that medical productivity is limited not by doctor numbers but by the lack of nurses and support staff has significant support in the actual data).




When the bill proposing a sharp cut in manager numbers was being debated, I tried to find some benchmarks for how many managers an organisation like the NHS might need. One crude approach would be to compare the NHS against other organisations in the UK. Unfortunately the ONS only counts managers in the economy as a whole and not by industry or sector (about 11% of the workforce are managers according to them). So I looked elsewhere (see the original BMJ letter reporting this here and the longer version here).

In the USA charities have to declare how much of their budgets are spent in three separate categories: money spent fundraising, money spent on their projects and money spent on deciding how to run the charity and allocate their spend. That last category is the one that might help us estimate the money an organisation spends on management. It isn't a perfect proxy but it isn't bad. Charities, like the NHS, are not in the business of enriching their chief executives and they are under pressure from supporters and regulators to be frugal, so as much of their spend as possible should go on their purpose, not on overheads. But frugality has to be balanced by the need to spend money well. Spending too little on good decisions is just as bad as spending too much.

Most charities spend more than the NHS; medical charities often spend three times the proportion of revenue that the NHS does. If the NHS were a charity, it would risk investigation by regulators for a lack of management capacity.

There is one important caveat to this analysis. When I was looking for benchmarks I was focussing on the very heavy cuts to managers in commissioning (this was the focus of the Lansley cuts). CCGs and national bodies are the groups responsible for deciding how to configure services across the country or in a particular area. They are the people who have to decide whether it might be better to spend more in the community and less in hospitals (which traditionally dominate everything in the NHS). If they don't have the capacity to make good decisions, then the NHS is in trouble, as it will be stuck with the way things currently are whether that is good for the population or not. The charity comparison is particularly stark for commissioners, who now have so little management capacity it is hard to see how they get anything done by themselves (this, perhaps, explains their extensive use of management consultants, which is often complained about by people who don't seem to understand the lack of management capacity that drives it).

But the lack of management in hospitals is also a problem. Managers' jobs there should be to design effective systems, to coordinate the work of front-line staff and to do the analysis that drives and sustains improvement. This should lower the burden of paperwork and admin on doctors and nurses. If there are too few managers doing the right things, then bad and inefficient processes will persist, lowering the quality and productivity of all the work done by front-line staff. Improvement won't happen. And the doctors and nurses will spend too much of their time on administration instead of treating patients. It is very obvious from the overall staffing numbers that hospitals, not just commissioners, have far too few managers.

This should be more obvious than it seems. The big NHS problems are problems of coordination and operational effectiveness. The NHS has a big issue with knowing where to spend money to make the whole system better and struggles to improve consistently or to spread best practices quickly. These are managerial problems in any organisation, and management failure or lack of capacity makes extra spending, even when it arrives, a lot less effective than it should be. If you just spend more without knowing where the big problems are, you may well not fix them at all. This is abundantly illustrated by the persistent failure to analyse the real reasons for the decline in A&E performance (see my analysis).

Weak management also leaves the service incapable of resisting stupid ideas coming from the political centre. For example, Jeremy Hunt's proposal to put GPs at the front door of all A&Es is an idea that any competent analyst or manager would resist because it couldn't possibly work. Lack of management capacity leaves front-line nurses and doctors working with badly designed processes and many end up spending far too much time on administration when they should be treating patients.

There is plenty of evidence that the NHS needs more money. But even if the extra money arrives it will yield far fewer improvements than it should if the people spending it are short of management capacity. It is time to kill the myth that the NHS is overmanaged. In fact a lack of management capacity is one of its biggest problems.