Friday, 3 August 2018

Asking the wrong question about GP behaviour is even worse than getting the wrong answer about it

A recent survey based on an NHS England idea suggested that 10-20% of GP appointments were avoidable. But the answer is useless because the wrong question was asked of the wrong people at the wrong point in the process. Worse, the very framing of the survey was built on false assumptions about how GPs could work, leaving the most important question unanswered: what would happen if GPs organised their work differently? It is astounding that such a bad survey was commissioned and has any influence over NHS policy.

That GPs are overloaded with demand and overworked appears to be an almost unquestioned belief in the current NHS. So it is important to understand what can be done about this. We need data. We need good analysis. We need better ideas about what to do.

So when I saw reports concluding that 20% of GP appointments were avoidable, I thought they might be the result of a careful analysis of what was going on.

But I was wary. Similar surveys of A&E attendances conclude that too many people go to A&E instead of other services. This observation is, however, useless as it fails to consider that these people are not the cause of poor A&E performance and that we have no idea how to make them go anywhere else. It is therefore useless for policy, unless wish fulfilment is now a major element of NHS planning.

Sadly, despite the amount of effort put into the GP survey by The Primary Care Foundation, the same is true of its results. In fact they might be worse.

As far as I can tell the key survey asked GPs at the end of a sample of appointments whether that appointment could have been avoided. Nationally they thought that perhaps 20% could have been handled by someone else (by which they mean some mix of nurses, pharmacists or other staff). So far so good. The results might even be true.

But they have asked the wrong question of the wrong people at the wrong point in the process.

What if, instead of waiting for patients to get through the typically annoying process of obtaining a 10 minute slot with their GP, the surveyors had asked how many of the people granted an appointment actually needed an appointment to sort out their problem? By assuming that every patient interaction has to involve a 10 minute appointment we have already made the strong assumption that 10 minute appointments are the only way GPs can respond to demand, and that demand can be mitigated, but only slightly, by using a different mix of staff in the practice combined with better signposting.

We have good evidence from a number of practices that changing the way GPs respond to demand can have a much bigger impact than this. Scores of practices have switched to different processes where the GP interacts with patients before booking appointments and only offers face to face appointments to those where the GP and patient agree it is required. These GPs typically find that 60-70% of demand can be handled without an appointment. In the practices that get this right the GP workload goes down substantially and patient satisfaction soars as they typically get fast on-demand responses to their problems and same-day appointments when they need them (rather than having to wait a week or two for the next available slot). See this tweet from GP Dave Triska, for example (he tweets his experience regularly and it is well worth checking out his feed).

The problem, these GPs have realised, is that the assumption that the only tool they have is a 10 minute appointment is false. There are plenty of other ways to respond to many patient requests and most of them are far more efficient than 10 minutes spent face to face. Sorting this out before spending 10 minutes in front of the patient saves a lot of time for both.

What the Primary Care Foundation should have done is to survey the incoming demand to GP practices and ask whether a face-to-face appointment was the best way to respond to that demand. By failing to do this they embedded the false assumption that 10 minute slots are the only tool in a GP's toolshed. This reinforces the false belief that there is no alternative and that the best we can do is to make minor adjustments inside the practice or, somehow, deflect the demand somewhere else.

The net result of this bad survey will be to blind GPs and policymakers to far better, more radical alternatives. That's really not the best way to get data telling us how to improve GP practice.

Tuesday, 3 July 2018

Knee-jerk ideology makes any sensible debate on the NHS virtually impossible

The NHS needs to improve. Most people agree with that. But when it comes to what specific policies or actions will deliver improvement there is far less agreement. So sensible debate about what to do would be useful. Sadly we are unlikely to ever have that debate when the response to any suggestion consists of a storm of ideological name calling.

A recent opinion piece on the BBC's Newsnight programme about the NHS has led to a storm of protest and abuse. The piece consisted of a short video by Kate Andrews of the IEA (a free market think tank whose funding is not very transparent). The response mostly consisted of arguments like this:
  1. The BBC should not provide a vehicle for right wing propaganda
  2. Kate Andrews doesn't understand the NHS because she is an American
  3. The IEA is an evil propagandist for an insurance based system and NHS privatisation
  4. The USA's health system is evil
  5. The BBC is biased
And more on the same lines.

Before I watched the video I assumed that she must have argued that the NHS needed to be financed by user charges and broken up into an American-style mess run by the private sector for profit. Then I watched it. And my reaction was "what the holy fuck are those commentators talking about?"

The key arguments in the video are these:
  1. The NHS is in a state of perpetual crisis
  2. There is little appetite for radical reform
  3. Campaigners often pretend that there are only two alternatives: the NHS or the US "system"
  4. The NHS isn't, on reasonable metrics, "the envy of the world": outcomes are better in many other health systems
  5. The USA is a crap system but is also an extreme outlier and a meaningless comparison
  6. Many other countries (Australia, Singapore, The Netherlands, Germany, Sweden…) have universal healthcare with better outcomes than the UK
  7. Competition between a variety of providers (both profit and non-profit) is a common factor in other universal care systems
  8. Market reform in the NHS would be good
And that is it. No calls for abolishing "free at the point of use". No calls for privatisation. No praise at all for the US system.

The only contentious call is for more market reform in the NHS. The most left-wing pro-NHS campaigners agree with point 1. Point 2 is arguable but not controversial (unless you think that abolishing the Lansley act is radical, which it isn't: returning the NHS to its previous legal and structural model would be a big change, but a conservative one). Point 3 is simply a summary of what the majority of commentators and campaigners seem to argue. Point 4 is solidly based on facts. Point 5 is also uncontroversial (though the fact that someone from the IEA has just said it ought to be interesting, and pro-NHS campaigners might like to quote it a few more times in their arguments). Point 6 is also true but, annoyingly, rarely mentioned in the debate about what we should do to improve the NHS. So is point 7, which probably explains why left-wing campaigners rarely mention point 6: it undermines the simplistic idea that provider competition is the root cause of everything wrong with the NHS. Point 8 is admittedly contentious and worth arguing about, though not for the reasons campaigners would normally use.

On that last point it is worth a quick diversion to see what good arguments against it would look like. The best argument isn't that competition doesn't work: it does, even in the NHS, as respected and non-ideological health economists like Carol Propper have shown. But those studies showed that the benefits to quality were likely small (though the NHS's experiments with competition for elective procedures were not very radical). More importantly, more competition would do little to address the immediate short-term problems in the NHS even if it improved quality in the long term. We have more important things to tackle right now, like gross underfunding, especially of capital and improvement projects.

So I can take Kate Andrews's argument and deal with it like an adult, looking specifically at what she argues for and rebutting it with logic. A disturbing amount of the comment, however, looks less like a fight in a primary school playground and more like a shit-throwing contest between two opposing tribes of agitated chimpanzees (seriously, read the twitter thread on the Newsnight post).

It is hard to judge whether any of the shit-throwing commentators even watched the video. I thought more might have latched onto her criticism of the US model ("even the IEA think healthcare in the USA is rubbish!"). What we actually got as a typical response was a series of ad-hominem attacks on Andrews and the IEA coupled with damning criticism of BBC bias for daring to allow such dangerous views to be broadcast (despite them clearly being labelled as an opinion piece to provoke debate and comment). Yes, the opaque funding of the IEA should make us suspicious of what they say and forensic in examining the facts and arguments they make, but it doesn't automatically render what they say as mendacious nonsense.

Much of the comment validates a point I have made before: it is impossible to discuss the NHS in ways that might help improve it because the "debate" is conducted in clichéd shibboleths where all that matters is proving which side you are on. Identifying real problems and fixing them is utterly irrelevant.

If we really want to fix the NHS (and spend the government's new found largesse well) we need to dispassionately analyse what the real problems are and apply the new money to solving them. We need to look objectively at the facts and not just ignore the ones that don't entirely match whichever ideological agenda we have. The intemperate shit-throwing of the pro-NHS commentariat does not exactly encourage that debate. And the likelihood of finding good ways to make the NHS better is substantially lower as a result.

Tuesday, 6 February 2018

We should design A&E statistics to help improve the system not to make it look better than it really is

The desire to make A&E performance look good is in conflict with the need to improve it. The idea that we should report system performance (as David Nicholson argues) is a mistake. We should understand how the system actually works and measure the numbers that most help focus improvement effort.

The problem with the statistics on A&E performance goes far deeper than worries about fiddling the data to get better reported numbers. The real problem is the conflict between the political need to report the best performance and the management need to know what is happening so they can improve performance. The two are in serious conflict in a way the system has never fully acknowledged.

The political desire is to report something, preferably something good. The management need is to report something useful for directing management attention to actions that will lead to improvement.

The conflict arises because essentially all the performance problems are in major A&E departments so reporting a combined number (including minor units) distracts attention from the units where performance is worst. For management purposes we might as well ignore all the stand-alone WICs and MIUs as they almost never see waits longer than 4hr. We should report their performance, but we should report it separately to avoid confusion about where the problem is.
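To see why the combined number misleads, here is a minimal sketch of the dilution effect. The attendance counts and percentages below are invented for illustration, not real NHS figures: blending near-perfect type 3 performance into the headline flatters it without changing anything for patients waiting in major departments.

```python
# Hypothetical figures: a struggling major A&E plus a co-reported minor unit.
type1_attendances, type1_within_4hr = 30_000, 0.80   # major A&E, poor performance
type3_attendances, type3_within_4hr = 20_000, 0.998  # WIC/MIU, almost never breaches

# Attendance-weighted "system" performance, as reported in the headline number.
combined = (
    type1_attendances * type1_within_4hr
    + type3_attendances * type3_within_4hr
) / (type1_attendances + type3_attendances)

print(f"type 1 alone:      {type1_within_4hr:.1%}")  # 80.0%
print(f"combined headline: {combined:.1%}")          # 87.9%
```

In this toy example the headline jumps nearly eight points without a single extra patient being seen faster, which is exactly the confusion about where the problems are that separate reporting would avoid.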

The political argument is that emergency care is a system and we should report the overall performance of the system. But this is based on a fallacy. Originally, MIUs and WICs were intended to relieve "pressure" on major A&Es who would, as a result, show better performance. This failed for two reasons: one was that there was no evidence that they diverted a notable number of patients away from major A&Es; the other insight (only available after better data was collected from everyone) is that, even if they had succeeded in diverting patients from major departments, this would not have improved performance in the major A&Es (the divert-able patients are not the ones with long waits even in major A&Es).

What this means is that there are really two systems out there not one: major A&Es (type 1 departments) and units handling minor conditions (type 3s). They don't interact enough to be considered a single system. So they shouldn't be measured as a single system because that just creates confusion about where the problems are.

(There is a slight grey area where a type 3 unit is closely associated with a major A&E. In many of these all the minors are diverted at the front door to the type 3 unit. But this is exactly how a well functioning type 1 A&E should behave internally anyway (minors should be streamed to a simple and fast process) so it seems a little odd to separate the two for reporting purposes. This only applies when the units are co-located.)

The big conflict comes when hospitals are allowed to count remote WICs and MIUs in their headline performance. The only benefit of doing this is political: it makes the overall number look better. It causes immense confusion and inconsistency in headline performance, not least because some hospitals can do it and others can't, making any performance comparisons unfair. Worse, some hospitals have wasted large amounts of management time trying to grab control of minor units so their headline numbers improve, a change which makes precisely no difference to any patient's experience but wastes a lot of management resource on window-dressing rather than actual improvement.

It is unclear what the recent changes mean to all this. In the current reports it is normally possible to extract pure type 1 performance if you want to see it (though it isn't the headline number reported). If the new rules allow less transparent reporting of numbers from minor units they are clearly obfuscatory and bad for the system.

David Nicholson (the former head honcho of NHSE) doesn't agree. He claimed in this tweet that the 95% target was a system target, was correctly set at 95%, and would have been set much lower if it had been intended to cover type 1 performance only. But he needs a history lesson. The original target was set at 98% and was only reduced to 95% when the incoming coalition government realised that abolishing the target completely would be a catastrophe. And there was no convincing clinical argument for the change (the original 98% was to allow for a small number of clinical exceptions, concussions for example, where patients needed to be observed for more than 4hr before discharge). Yes, some medics thought the target was too strict, but they had no evidence-based clinical reasons for this: they believed, naively, that a more relaxed target would be "easier" to meet. They were wrong: before the target was changed the system was exceeding it; as soon as it was changed performance declined rapidly and frequently failed to meet even the new, supposedly easier, target.

It is also worth noting how the system performed when the original target was set. I extracted the chart below from my archives (I provided the internal weekly reports of national performance to the A&E team for several years after the target was first set: the chart comes from a public presentation I did; sadly I don't have access to the detailed data any more).

The system as a whole eventually met the 98% target for several years. And the type 1 performance often met the 98% until 2010. Moreover, the dilution effect of including the non-type 1 units was small when the type 1 performance was good. It is also worth noting that the decision to publish the system performance (including non-major A&Es) was challenged by analysts who knew that the focus needed to be on major A&E departments. The politically-minded civil servants disagreed and decided to publish the system performance but compromised by including the type 1 performance separately in internal reports to support more focussed improvement interventions.

While consistency is a good thing to aim for (this is the excuse for the changing rules) we seem to be aiming for a fake consistency. WICs and MIUs are not evenly distributed, so consistent aggregation of numbers creates an inconsistent and unfair view of actual major A&E performance in different hospitals. By far the easiest way to achieve real consistency would be to focus on major A&Es only and ignore the rest (or report their performance separately). That would be consistent and fair. But performance would look much worse.

The same is true for other, related, statistics on A&E performance. England also publishes a 12hr wait statistic, but this is incredibly misleading because it isn't an end-to-end metric. The clock starts when the hospital makes a decision to admit the patient, a decision many don't make until they know a bed is free, meaning that the patient may already have waited 12hr before the clock even starts. Wales and Scotland start the clock when the patient arrives, which isn't gameable. This has confused the Prime Minister, who incorrectly compared Welsh and English numbers during PMQs, drawing criticism from the chair of the UK Statistics Authority, not least because there are about 100 times more end-to-end 12hr waits in England than the published number quoted by the PM suggested (as I explained here).
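The difference between the two clock definitions can be sketched in a few lines. The timestamps below are invented for illustration; the point is that the same patient journey can breach a Welsh-style end-to-end 12hr measure while recording only a short English-style trolley wait:

```python
from datetime import datetime, timedelta

# Invented timestamps for a single hypothetical patient journey.
arrival           = datetime(2018, 1, 10, 8, 0)    # patient arrives at A&E
decision_to_admit = datetime(2018, 1, 10, 21, 30)  # English clock starts here
admitted          = datetime(2018, 1, 11, 1, 0)    # patient finally gets a bed

welsh_style_wait   = admitted - arrival            # end-to-end: 17 hours
english_style_wait = admitted - decision_to_admit  # "trolley wait": 3.5 hours

# The same patient breaches a 12hr end-to-end measure but is invisible to the
# English metric, because the decision can be postponed until a bed is free.
assert welsh_style_wait > timedelta(hours=12)
assert english_style_wait < timedelta(hours=12)
```

The later the decision-to-admit is recorded, the shorter the reported wait, which is what makes the English metric gameable.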

Real 12hr waits are actually measured in England, just not published. NHS Digital now release them (though infrequently) and few NHS managers seem to know they exist or use them, despite the important additional information they provide to supplement the reported 4hr performance. They are very embarrassing as they highlight just how badly A&E performance has deteriorated in the last 8 years. So embarrassing that I was once fired for impetuously talking about them in public (I shouldn't have done, but I was naively trying to improve the quality of the debate about A&E performance using actual data).

To summarise. The real problem isn't fiddling the numbers: it is choosing the wrong numbers to look at in the first place. The need to improve performance is in conflict with the political need to report the best performance possible. We should report the numbers most useful for driving improvement (type 1 4hr and 12hr performance) not the ones that confuse us about where the problems are.

PS when private sector firms do this with their performance numbers it usually ends up in disaster. Enron and Carillion, for example, used clever ruses to make their public headline financials look better than their real underlying performance. In the end their managements fooled themselves. Eventually, the real situation became apparent, but far too late for anyone to act to avert disaster.

Thursday, 25 January 2018

The week in bullshit, continued…

Last week I fired off a rant about how politicians misuse statistics to win arguments with little regard for the relevance or context of the numbers thereby doing extreme violence to honesty and truth telling. This week I find yet more examples from the NHS.

Here is why the specific instances constitute bullshit.

The Prime Minister said the following in Prime Minister's Questions on January 24 in response to a Jeremy Corbyn complaint about poor performance in the English NHS over winter:

If he wants to talk about figures and about targets being missed, yes, the latest figures show that, in England, 497 people were waiting more than 12 hours, but the latest figures also show that, under the Labour Government in Wales, 3,741 people were waiting more than 12 hours.

Corbyn didn't spot the problem (I presume neither he nor his advisors are any more knowledgeable about these statistics than the PM or her health minister).

But it is quite a simple issue: the Welsh NHS counts something different when it measures 12hr A&E waits than the English NHS. In Wales, the clock starts when the patient arrives; in England the clock starts when a decision is made to admit the patient to a bed. This decision is highly gameable and is highly gamed. Even without any explicit fiddling of the numbers (there are rumours that some management teams stop their teams recording a time in a timely way to reduce their reported numbers) the decision is often postponed until the hospital knows there is a bed available. This may happen after the patient has already waited 12 hours in A&E.

The English "trolley wait" metric is a terrible, useless and misleading metric. It actively distracts from a good understanding of the problem of long A&E waits. Yet here we have a politician using it to win an argument with the opposition instead of trying to understand what is going on in A&E.

Here is some help to put it in context. The comparable number is accessible from data collected from hospitals (it is trivial to calculate true 12hr waits from A&E HES data; it just isn't routinely done and won't yet be available for this winter as national HES takes a few months to compile). In January 2017 there were 46,413 true 12hr waits in English A&Es (these figures were released by NHS Digital after an FOI request). That is the comparable number May should have quoted (or an up-to-date version of it, which there is no reason at all to assume would be any better in January 2018). If anyone in the system cared to have reliable and useful numbers to tell them how A&E was performing, they could easily collect these numbers on the same basis as Wales, giving them a much better and ungameable insight into what is really happening. Guess why they don't do that.

The disease, unfortunately, runs deep. Here are some extracts from Pauline Philip's report on winter pressures to the NHS Improvement board on January 24: information for January suggests an improvement and the system is performing better than at the same point last year… 
[compare to this a few paragraphs later]
...A&E performance for December was 85.1%. This is 3.8ppt below the previous month (88.9%) and 1.0ppt lower than the same time last year...

...Performance is impacted by higher bed occupancy than last year and increases in attendances and emergency admissions...

[a few paragraphs later]
...Type 1 attendance growth compared to the same month last year is 1.0%.

[a 1% increase in attendance is actually below the long term trend in increases, though, to be fair, admissions were up by a much larger amount and they matter more. OTOH the excuse that "performance is impacted by...increases in attendances" is not the most accurate way to report the situation]

...the trend for much lower long trolley waits continues; 12 hour waits are 10% lower compared to the same time last year.

The problem here is that the text is frequently misleading when compared to the numbers quoted (always a danger when people are allowed to write paragraphs of bullshit instead of showing the clearest analysis of the key data points in context). And Philip (who should know better, having run one of the hospitals with an outstanding A&E department) goes on to use the same trolley wait statistic that the PM quoted in her answer to claim that things are improving. It is such an unreliable statistic that it tells us no such thing.

If you are going to manage A&E better you need to use numbers that are reliable indicators of what is really going on, not metrics that are both misleading and utterly gameable. The trolley wait metric should have been burned years ago.

Maybe this point is simply not understood and the politicians and NHS leaders just don't get that this statistic is bullshit. Maybe this interpretation lowers their culpability for promulgating bullshit, but it is hardly comforting that the people in charge of improvement don't seem to possess the basic knowledge that any competent analyst of A&E statistics has known for years.

The reason why England doesn't routinely release reliable numbers about long waits in A&E is that they are very embarrassing. If they were widely used by NHS Improvement, as they should be, to understand what was really happening so that efforts could be focussed on generating real improvements, there would be a lot of bad headlines (which might be worth it if it led to actual improvement). 

Sadly, in politics and organisations dominated by political management, improvement isn't the point: good headlines are all that matters. The impact of political bullshit is pervasive and corrosive.

PS. I'm not the only one who noticed. Faye Kirkland posted this on twitter just after I completed the original version of the blog. It is a letter to the Chair of the UK Statistics Authority pointing out just how misleading her comparison was. It will be interesting to see how he reacts.

Tuesday, 16 January 2018

Political bullshit with numbers is making it ever harder to make good decisions

If governments want to make good decisions they have to have reliable data about what is happening. But they increasingly don't use numbers that way. Instead of using data for insight they use it for bullshit and undermine the evidence they need to make a difference to anything.

So the NHS is having a winter crisis. This year, instead of the service responding in a panic when the unpredictable event of winter occurred, the panic response was, apparently, planned. Apparently, this is good; our lords and masters said so.

But there is a little vignette that occurred in Parliament that illustrates a great deal about why we have such problems, and even more about why we currently look like a kakistocracy. It relates to the statistics about bed occupancy in hospitals and illustrates something profoundly disturbing about how politicians handle statistics and use numbers.

The background to the story is that the government has now mandated daily statistics about "winter pressures" in the NHS. That might not be a bad thing in itself if the point were to make management decisions in response to the numbers (though this supposes an ability to know what response to make and to interpret the numbers correctly: neither are obviously true).

One of those statistics is bed occupancy. This isn't a very useful statistic (as I've argued before) but collecting it daily is much better than weekly or monthly, which is what is done for the rest of the year.

The government (and many others) have set a "target" level of occupancy for beds to ensure there are enough free beds each day to cope with demand. That target says no more than 85% of beds should be occupied.

So far so good. But the annoying doctors and opposition insist we are in the middle of a crisis in bed availability and keep complaining. In response to one of those complaints and in explaining the impact of his winter plan Jeremy Hunt said this in the House of Commons:
The shadow Health Secretary told The Independent: “It is completely unacceptable that the 85% bed occupancy target…has been missed”. What was bed occupancy on Christmas eve? It was 84.2%, so this had a real impact.

To put his claim in context, here is the chart of daily occupancy to early January (from the latest data I could get from NHS England). Shading identifies complete weeks:

Which number did Jeremy Hunt repeat? The least representative number on the chart and the only day in the whole of winter when the target was met. He also ignored the longer term context: the days near Christmas have the lowest occupancy of the whole year and, historically, have often been in the 60% range.
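A toy series makes the cherry-picking obvious. The daily occupancy figures below are invented (the real data sat well above the target on almost every day, dipping only over the holiday), but they show how quoting the single best day misrepresents the whole period:

```python
# Invented daily bed occupancy for a winter fortnight, with a holiday dip.
occupancy = [0.95, 0.94, 0.95, 0.96, 0.93, 0.91, 0.88,
             0.842, 0.86, 0.90, 0.94, 0.95, 0.96, 0.95]

TARGET = 0.85
best_day = min(occupancy)                 # the one day below the 85% target
period_average = sum(occupancy) / len(occupancy)
days_over_target = sum(1 for day in occupancy if day > TARGET)

print(f"cherry-picked day: {best_day:.1%}")        # 84.2%
print(f"period average:    {period_average:.1%}")  # 92.3%
print(f"days over target:  {days_over_target} of {len(occupancy)}")  # 13 of 14
```

Quoting the minimum of a series as if it described the period is exactly the out-of-context number-trawling the post complains about.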

Now maybe he was just having a bad day and didn't mean to quote something so irrelevant to the current problems with beds. But another minister said this two days earlier when challenged with a similar complaint in the Lords:
The noble Baroness talked about bed occupancy. Of course, we know that high levels of bed occupancy are a concern. Bed occupancy was below the target of 85% going into this period—on Christmas Eve it was 84.2%
I think we can conclude that this number has been shared around the government as the one to quote to deflect any complaints about the state of the NHS at winter.

Sure, politicians have to win debates and this will, inevitably, involve some spin. But the way this number was brought up goes beyond reasonable spin and becomes what Frankfurt would describe as bullshit:
[the bullshitter] does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.
What this case illustrates is a deeply troubling view about how politicians treat statistics. They do not look to them as a source of useful information they can use to make decisions. They trawl them for any number that supports the argument they are trying to make, regardless of meaning or context. In doing this they utterly devalue their use in decision making or management.

There is an alternative explanation that is slightly less pejorative: perhaps they are so statistically illiterate that they don't understand the numbers or the context. Unfortunately it isn't obvious that this explanation offers any more optimism about how well the country is run.

I tell this story as an illustration of a very widespread and pervasive phenomenon in modern politics. It isn't just the government; they are all at it, opposition and minor parties included. There seems to be no drive to make the effort to analyse problems before deciding key policies or actions. The process now seems to be to identify some actions or policies likely to play well in newspaper headlines or with supporters. Only then, after the key decisions are made, does anyone look at the evidence, and then only to wrench out some number, no matter how out of context or irrelevant, that supports their view. Even when the number is rebutted by the highest statistical authority in the land, they will often continue to quote it (as Boris has just done with the legendarily bad claim that £350m/week goes to the EU). Truth and context are irrelevant: all that matters is winning the argument.

This is no way to run a government. We need people in government and opposition who are competent, honest and who are prepared to do the hard work of analysis before making arguments or deciding policies. If we don't get them, and get them soon, the bullshit will overwhelm our ability to make any good decisions about anything in public policy.

Saturday, 13 January 2018

The NHS isn't over-managed

The NHS needs more money. But the belief that it needs less management or administration is nonsense. It won't spend any new money well unless it improves its ability to decide how that money should be spent. That means it needs more management, not less.

The FT has a well-deserved reputation for balanced and factual commentary on the big issues. So I was surprised to see this cliché repeated in an editorial on January 5: "There are too many administrators and not enough front-line medical staff."

Other commentators constantly repeat similar untrue clichés. On Radio 5's "good week bad week" on Sunday I heard someone claim "the NHS has more managers than nurses".

It isn't true. The reality is that the NHS is one of the most undermanaged organisations on the planet. Here are the numbers from the NHS staffing system.

There are ten times more nurses than managers and three times more doctors than managers.

And the number of managers has been falling. The numbers were cut by about 30% by the Lansley reforms because he believed the cliché ("more resources to the front line", which I've argued before is one of the stupidest in the debate on NHS policy). It hasn't obviously worked.

Just for reference, here are the relative numbers of different types of staff compared to their levels before the coalition government took over in 2010:

What is notable is the steady and then sharp decline in manager numbers (with a subsequent slow increase as the system realised it had drastically overdone the cuts). Also notable is that consultant numbers are rising a lot but nursing numbers are steady (which suggests the recent complaint that medical productivity is limited not by doctor numbers but by the lack of nurses and support staff has significant support in the actual data).

When the bill proposing a sharp cut in manager numbers was being debated, I tried to find some benchmarks for how many managers an organisation like the NHS might need. One crude approach would be to compare the NHS against other organisations in the UK. Unfortunately the ONS only counts managers in the economy as a whole and not by industry or sector (about 11% of the workforce are managers according to them). So I looked elsewhere (see the original BMJ letter reporting this here and the longer version here).

In the USA charities have to declare how much of their budgets are spent in three separate categories: money spent fundraising, money spent on their projects and money spent on deciding how to run the charity and allocate their spend. That last category is the one that might help us estimate the money an organisation spends on management. It isn't a perfect proxy but it isn't bad. Charities, like the NHS, are not in the business of enriching their chief executives and they are under pressure from supporters and regulators to be frugal, so as much of their spend as possible should go on their purpose, not on overheads. But frugality has to be balanced by the need to spend money well. Spending too little on good decisions is just as bad as spending too much.

Most charities spend more than the NHS; medical charities often spend three times the proportion of revenue that the NHS does. If the NHS were a charity it would risk investigation by regulators for a lack of management capacity.

There is one important caveat to this analysis. When I was looking for benchmarks I was focussing on the very heavy cuts to managers in commissioning (this was the focus of the Lansley cuts). CCGs and national bodies are the groups responsible for deciding how to configure services across the country or in a particular area. They are the people who have to decide whether it might be better to spend more in the community and less in hospitals (which traditionally dominate everything in the NHS). If they don't have the capacity to make good decisions, then the NHS is in trouble as it will be stuck with the way things currently are, whether that is good for the population or not. The charity comparison is particularly stark for commissioners, who now have so little management capacity it is hard to see how they get anything done by themselves (this, perhaps, explains their extensive use of management consultants, which is often complained about by people who don't seem to understand the lack of management capacity that drives it).

But the lack of management in hospitals is also a problem. Managers' jobs there should be to design effective systems, to coordinate the work of front-line staff and to do the analysis that drives and sustains improvement. This should lower the burden of paperwork and admin on doctors and nurses. If there are too few managers doing the right things then bad and inefficient processes will persist, lowering the quality and productivity of all the work done by front-line staff. Improvement won't happen. And the doctors and nurses will spend too much of their time on administration instead of treating patients. It is very obvious from the overall staffing numbers that hospitals, not just commissioners, have far too few managers.

This should be more obvious than it seems. The big NHS problems are problems of coordination and operational effectiveness. The NHS has a big issue with knowing where to spend money to make the whole system better and struggles to consistently improve or to spread best practices quickly. These are managerial problems in any organisation, and management failure or lack of capacity makes extra spending, even when it arrives, a lot less effective than it should be. If you just spend more without knowing where the big problems are, you may well not fix the problems at all. This is abundantly illustrated by the persistent failure to analyse the real reasons for the decline in A&E performance (see my analysis).

Weak management also leaves the service incapable of resisting stupid ideas coming from the political centre. For example, Jeremy Hunt's proposal to put GPs at the front door of all A&Es is an idea that any competent analyst or manager would resist because it couldn't possibly work. Lack of management capacity leaves front-line nurses and doctors working with badly designed processes and many end up spending far too much time on administration when they should be treating patients.

There is plenty of evidence that the NHS needs more money. But even if the extra money arrives it will yield far fewer improvements than it should if the people spending it are short of management capacity. It is time to kill the myth that the NHS is overmanaged. In fact a lack of management capacity is one of its biggest problems.

Thursday, 13 July 2017

Political management of operational performance is a catastrophe for NHS improvement

You can't take politics out of the NHS: politicians have to set the budget and raise the taxes to pay for it. But that doesn't mean they should be allowed to interfere in the details of operational management. They don't understand how the NHS works and so, when they do interfere, they tend to favour appearance over substance, and this throttles efforts to achieve real improvement. But Daily Mail headlines are a lot less important than the wellbeing and lives of NHS patients. This is a huge problem for both sides of the debate on the NHS.

Current political thinking about NHS problems is a catastrophe

A recent story in the HSJ was a perfect illustration of why political thinking is a curse on the NHS. The story was apparently about procurement rules, though this is misleading. The issue was that a hospital wanted to take over a bunch of MIUs and WICs. The reason was that this would enable the hospital to include their (good) performance numbers in the organisation's headline performance, diluting the impact of its (awful) A&E performance.

This is roughly the equivalent of Marks and Spencer buying a profitable oil exploration company to disguise the weak performance of its retail business.

Amazingly, the focus of the story was on the implications for procurement rules, which should prohibit this sort of transaction for competition reasons. Let's face it, only a few diehards care much about those rules (the rest of us probably should care more but it just doesn't seem like the biggest issue for the NHS right now).

The real story here is of far more significance for the NHS and how it is run. It is about how political thinking has deeply permeated NHS management and corrupted it. I want to explore the implications of that.

Let's be clear about what was proposed by the trust. They wanted to waste a ton of scarce management effort on a plan to adjust the organisational structure in their local area that would result in absolutely no benefits at all for patients. Their motivation was to be able to report better results to the NHS. Note the key fact again: the rules would let them report better performance despite the fact that absolutely nothing would change for any patient anywhere in their area. As far as I can tell nobody even thought it necessary to disguise this goal. It is as if a doctor chose to treat a melanoma by offering the patient strong cosmetics to cover up the skin blemishes. All the management effort was going into gaming the system, not improving it.

Some might argue that management were forced into this position by the way the rules work. After all, better reported performance might result in more money for the hospital and that has to be good, right? Bollocks. If the system rewards gaming rather than real improvement the allocation of money might as well be by lottery. On second thoughts a lottery would at least be random and therefore in some sense "fair"; allocation of money to the best bullshitters is actively harmful and destructive.

It isn't a new issue for A&E performance. It was a problem in the early 2000s when the target was first set. Then, as now, performance was measured across a number of heterogeneous types of unit: major A&Es (type 1 departments) are open 24hrs and handle almost anything; but there are also specialist units and a mix of Walk In Centres (WICs) and Minor Injury Units (MIUs) that only handle minor injuries and don't open 24hrs. The problem is that almost all of the bad performance is in the major A&Es. I can't remember the last time a non-major unit breached the A&E target. So, if you want to know where the problems are, you need to look at the type 1 performance. Diluting this focus by allowing hospitals to quote the overall results by including other units makes it harder to see where the real problems are. The top civil servants in the A&E team originally wanted to report only the combined numbers (unsurprisingly they are always better and enable the minister to claim better performance). I argued that this was a mistake and would result in management effort being misdirected. The improvement team needed to focus as clearly as possible on the places that actually had problems. The compromise was to report both numbers internally (but let the minister claim the higher numbers in public).

The story is the epitome of what goes wrong in NHS management when political thinking pushes out good operational practice.

The root of the problem is the way politicians think. They focus on what looks good in newspaper headlines today rather than what will work tomorrow. And few, if any, take the time to explore the root cause of the visible problems in the NHS.

Perhaps we should not be too hard on them as they have to win arguments. And winning arguments requires persuasive rhetoric. But the facts must come before the rhetoric. Win the right argument. In reality rhetoric is often all there is and the facts are warped to fit. This doesn't work. Reality is not influenced by propaganda. Pretending things are better than they are disables the key flows of information that tell you how to improve.

It is a problem because the political way of thinking becomes deeply embedded into the way management decisions are made inside the NHS. And when you understand the political mindset you can see that this is deeply corrupting. Good management monitors and measures the things that matter for understanding and improving performance. Political management is only interested in things that can make performance look good.

The political mindset doesn't just corrupt what gets measured; it also drives plans and actions that bear little relationship to reality. So the centre demands that trusts produce "improvement trajectories" that bear about as much relationship to reality as the athletic sex in a porn movie bears to real bedroom behaviour. There is little or no incentive for an honest appraisal of the root causes of problems which might create a realistic chance for improvement but might take far longer than the planning and reporting horizon.

The political mindset is reinforced when politicians directly interfere in operational decisions. The interventions usually seem to be as unrelated to operational reality as the reported numbers. Again, the focus seems to be on getting catchy headlines that proclaim "something is being done". Jeremy Hunt's desire to put lots of GPs at the door of A&E departments sounds good if you know nothing about how A&Es work or where their problems are; if you know what A&Es are really like it looks like the policy equivalent of giving your dog an antibiotic to cure a viral infection.

Running the system doesn't mean interfering in the operational details

It is a cliche that we should take all politics out of NHS management. But we can't. The budget and the taxes to pay for it have to be determined by politicians: that's how UK democracy works. To think otherwise is a technocratic fantasy (often perpetuated by wannabe technocrats whose understanding of the sources of NHS problems is no better developed than that of the politicians).

The problem I'm pointing out isn't that: it is the problem of interference in operational issues. Just because politicians set the overall budget and direction doesn't mean they should meddle with how hip replacements are done, how GPs organise themselves to respond to patient needs or how flow in A&E departments is managed.

The problem of political interference isn't helped by the way opposition politicians treat problems in the NHS. They too lack any useful operational insights into the real causes of problems. But they reinforce the idea that the government should be held responsible for the details of how the NHS is run. This would not work even if the leadership at the top had any useful insights into the operational reality of how the NHS actually worked: the organisation is just too big for any central body to know the details of why Mrs Stevens waited 12hr on a trolley in Cornwall's A&E department.

The people who could know those details are the managers and medics in the local hospital or GP practice. They have the capability to understand what is happening in their local operations and to identify what isn't working properly; they should have the accountability for fixing things that don't work.

But the very people who should be able to understand problems and fix them are undermined by the process of political management. The message from the top is that it is more important to make things look good than to actually fix them; the metrics required to measure whether things are working or not are corrupted by the central need to report good news; long term improvement is useless in meeting the short term need for good headlines. Worse, the constant focus on cliches like "more resources to the front line" demonises managers and leads to policy where we might well get more medics (more doctors looks good in headlines) but those medics can't be productive because of a lack of support staff. The lobbyists for doctors and nurses collude in this by constantly arguing that the problem is a lack of whatever staff group they represent ignoring the strong evidence that their members can't be productive in a badly designed system where they are poorly coordinated and don't have enough support staff (a recent report by the Royal College of Surgeons is a notable exception).

It is also worth noting that hospitals have very few managers (on sensible definitions only 2-3% of staff are managers). Most hospitals have fewer than they really need. Moreover, if we look at the differences between hospitals, the ones with more managers get better results for patients and have better financial outcomes.

Managers' and clinical leaders' roles in driving local improvement are also strongly undermined by constant burdens from the centre. The NHS isn't content to look at transparent and well-designed measures of outcomes and judge local management on whether they achieve them: instead the centre interferes at the most detailed levels as though it understands how the local job should be done (it doesn't). This results in a demand for local units to report a mountain of badly designed metrics and to follow detailed central guidance on how they work, when local experimentation and improvement would be far more effective. Nigel Edwards once described this as:

"A significant organisational pathology"

Also observing:

"Time that should be spent dealing with problems is diverted to reporting on the actions being taken and providing reassurance that previous action plans have been executed."

When bad managers are combined with political management we get the worst of both worlds. Instead of focussing on the root causes of performance problems, the managers now focus on how to game the targets. Hence catastrophic waiting list behaviours and a dangerous spike in apparent activity in A&E departments in the minutes before patients have waited 4hr. This sort of behaviour is sometimes blamed on "targets". But not all targets are bad and the real blame lies with bad managers working in a system that rewards gaming rather than real improvement.

How could we do things differently?

Political management results in poor choices of what is measured, poor choices about what is done and a persistent inability to achieve substantive improvement. It isn't an easy problem to fix, but there are a lot of things that can be done even if politicians don't change their spots.

Opposition politicians should stop behaving like the government and do some homework before proposing alternatives to government policy. Understanding what is broken in the NHS is a harder job than proposing the opposite of what the government is doing, but the reward might be to improve the quality of debate.

We could do with more transparency in how we measure performance. Both the design of performance metrics and their dissemination need to be more independent of government. Manipulation of the metrics to lower the number of bad news stories should not be allowed (see this on how reporting of A&E performance was corrupted). Other government statistics are moving this way; so should everything in health.

If we want to generate some real improvement in the NHS the key is for local organisations to do their own thing. The central metrics don't tell you what you need to do to improve. Ignore them and measure and report what helps you gain insight into performance and quality. Focus on real improvement and the government mandated headlines will follow.

One final plea to fellow commentators: do your homework. If you comment on how messed up the NHS is but do so using the same superficial cliches as the government, you are part of the problem, not the solution. Current policy is often based on a search for good headlines uninformed by actual analysis of the real problems. Unless those who oppose as well as those who govern learn to break the habit of seeking headlines rather than solutions, policy won't improve.

This is very important: good Daily Mail headlines are no substitute for the wellbeing and lives of NHS patients. Unless everyone involved in the debate (journalists, commentators, lobbyists and politicians on both sides) ups their game, patients will suffer and the NHS won't improve.