
Friday 9 February 2024

The NHS needs to redesign the metrics it uses for A&E performance


Getting patients through A&E in 4hr is a good goal, one the NHS once achieved for the best part of a decade. But the way this performance is calculated is a mess that needs serious revision if the system is ever going to achieve it again.


NHS performance data dump day happened in early February and we got the numbers for performance up to January. They were mostly bad but we have become so inured to bad performance that they didn’t raise many eyebrows. And the combined Panglossian might of the DH and NHSE press offices will undoubtedly manage to squeeze some positive messages from the detail.


We should ignore anything the press offices say. Not least because they will all be the first against the wall when the revolution comes.


And, apparently, NHSE are trying to get ministerial sign-off for a new interim target for A&E performance to drive improvement. But the new target is to get 77% of patients out within 4hr, just one percentage point more than the current (shockingly unambitious) target of 76%.


They should be far more ambitious. And ministers should insist that the targets are redesigned, as the current ones are as useful as the Fukushima nuclear power plant after the tsunami.


Here are some back-of-the-envelope observations from the January numbers that show why major changes are needed.


The 4hr target and its problems

There isn’t anything fundamentally wrong with the 4hr target, despite what some anti-target thinkers claim. When it was first introduced many claimed it was purely an arbitrary management target and would distort clinical decisions. But this has been studied and it wasn’t true. Setting and enforcing the standard led to huge improvement.


Getting through an A&E quickly is good for the patient. And the original intent was to set a simple standard that would eliminate particularly dangerous long waits. The intuition behind this was good and we now have a great deal of evidence that long waits kill. In the biggest UK study, mortality starts to be measurably higher for waits over 5hr and keeps rising with longer waits (for admitted patients). Other studies elsewhere see the same effect for discharged patients.


And, since >98% of patients did leave A&E in <4hr from 2005 to 2010 with far fewer A&E staff than the current levels, we have good evidence the target is achievable. 


But the problem with the current way the target is calculated arises because of two factors: current achievement is very poor and there are now different types of “A&E” that don’t work the same way and have very different performance.


Type 3 A&E units take about 30% of the total volume and have grown a lot in the last 15 years (some are called walk-in centres (WICs), others minor injury units (MIUs) or urgent care centres (UCCs)). They don’t open 24hr a day and can’t handle major injuries or some specialist services. But, most importantly, they don’t usually have problems meeting the 4hr target and have very little impact on major A&Es unless they are co-located.


But the metric for A&E performance includes their performance even when the units have no meaningful relationship to the major A&E their performance is attributed to. When everyone’s performance is good, this doesn’t matter as the headline metric will clearly signal where there is a performance problem. But now that major A&Es often have performance below 50%, including UCC numbers creates a huge opportunity for gaming and dilutes the signal identifying where the problems are.


Worse, they are not distributed evenly. Some hospitals have no attributable type 3 units; others have large numbers of them. This creates both inconsistency and an opportunity to game the headline number. In some cases hospitals have sought dodgy legal routes to “claim” control of type 3 units in order to hide how poor their persistently underperforming major A&E is.


To see how prevalent this is, look at this chart based on January 2024 numbers.


The Royal Cornwall’s major A&E had a performance of just 41% but their headline performance nearly met the interim national standard once their (unrelated) type 3 performance was included.


All the trusts in red are getting at least a 5 percentage point boost to their headline performance by including type 3 activity. If their major A&Es were performing in the 90%s this would barely matter, but only 3 trusts with big headline boosts are doing better than 65% on the major A&E performance. At those levels of performance, including type 3 activity gives a huge and unjustified boost to their headline number. For trusts in blue, the headline metric is a good approximation of their major A&E performance.
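To make the arithmetic concrete, here is a minimal sketch (with purely illustrative numbers, not real trust data) of how folding a high-performing type 3 unit into the denominator flatters the headline figure:

```python
# Minimal sketch of how including type 3 activity inflates the headline 4hr figure.
# All numbers below are illustrative only, not taken from any trust's real data.

type1_attendances = 10_000   # major A&E attendances in the month
type1_within_4hr = 4_100     # 41% seen within 4hr at the major A&E
type3_attendances = 7_000    # attributed UCC/MIU/WIC attendances
type3_within_4hr = 6_900     # type 3 units rarely breach the 4hr standard

type1_perf = type1_within_4hr / type1_attendances
headline_perf = (type1_within_4hr + type3_within_4hr) / (type1_attendances + type3_attendances)

print(f"Major A&E (type 1) performance: {type1_perf:.0%}")        # ~41%
print(f"Headline performance incl. type 3: {headline_perf:.0%}")  # ~65%
```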


Another way of viewing this data is shown below in a chart that ranks how many points trusts’ headline performance is boosted by including type 3 activity:


It is hard to take a metric seriously when the headline numbers see so much adjustment from factors unrelated to the core point of having a target.


The solution is fairly simple. If we are trying to drive improvement, the reported metric should be for individual units and type 3 units should be kept separate from major type 1 units. (There is a slight complication in that, if the type 3 is co-located with a major A&E, they should probably be grouped together, and this would affect some of the numbers above, but this isn’t that common.)


The performance problems are essentially all in type 1 units so a metric that focuses only on their performance should be used to identify and drive improvement. (Caveat: some clarification of definitions may be needed, as some of the above numbers may include co-located type 3 units that should really be counted as part of the major A&E.)


The problem of 12hr waits

There is another problem with using the 4hr metric to drive improvement. In its original formulation meeting the 4hr target virtually eliminated the possibility of very long waits. That is no longer true. If the standard time was set at 12hr not 4hr we would still be a long way from meeting it. Not only is the current NHS failing to get 95% of patients through A&E in 4hr, it isn’t even getting 90% through in 12hr. So driving improvement purely by looking at 4hr can miss the need to eliminate very long waits.


We have some evidence that 12hr waits continue to rise significantly while marginal improvements occur in the 4hr standard. This might suggest that some trusts are putting effort into the 4hr standard while neglecting patients who have already missed it, leaving them with very long waits. That is very much pursuing the target while missing the point.


While the 12hr performance is broadly related to the 4hr performance, the detail suggests that some trusts are much worse at curtailing very long waits. This chart shows the overall relationship with an extra twist: it also analyses the proportion of >4hr waits that also wait >12hr (nationally about one third of 4hr breaches end up waiting >12hr but this ratio varies a lot).
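For anyone wanting to reproduce that ratio, here is a minimal sketch of the calculation from a tidy extract of the published monthly data; the column names and numbers are my assumptions, not the official field names:

```python
import pandas as pd

# Sketch of the ">12hr as a share of >4hr breaches" calculation at trust level.
# Column names are assumptions about a tidy extract of the monthly data; the
# numbers are illustrative.
df = pd.DataFrame({
    "trust": ["Trust A", "Trust B", "Trust C"],
    "attendances": [12_000, 15_000, 9_000],
    "over_4hr": [5_000, 6_200, 2_100],
    "over_12hr": [2_100, 900, 700],
})

df["pct_over_4hr"] = df["over_4hr"] / df["attendances"]
# Share of 4hr breaches that went on to wait more than 12hr from arrival
df["share_of_breaches_over_12hr"] = df["over_12hr"] / df["over_4hr"]

print(df[["trust", "pct_over_4hr", "share_of_breaches_over_12hr"]])
```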



So, instead of trying to set an interim target for 4hr performance it might be far more effective to start with a focus on those very long waits. Set and enforce a target for 12hr waits as the interim metric and return to 4hr only when 12hr waits have been eliminated. 


This will cause a problem for NHSE, who have resisted publishing honest 12hr waits for nearly a decade (they were only forced to do so in February 2023 because the minister insisted on it). But, given the scale of excess mortality from those long waits (which is probably in excess of 2k patients per month), this should be a major priority.


The problem of the 12hr wait after DTA metric

NHSE might object to using 12hr waits from arrival on the grounds that it already has a 12hr metric with a long publication history. This is the longstanding 12hr wait after a decision to admit (commonly called the “trolley wait” target).


But this metric is unreliable and gameable. This has long been known. The intent of the metric is to focus attention on long waits for admitted patients caused by delays finding a bed. The problem is that the decision to admit (DTA) is entirely gameable. Hospitals can delay the DTA if beds are scarce, minimising the number of reported delays. Many patients have already waited 8-12hr by the time a DTA is made, so the reported numbers seriously misrepresent long waits. The 12hr from arrival metric is, in contrast, not gameable. Historically we don’t have monthly data to compare both metrics. But annual numbers are published and the real 12hr waits have been more than 100 times higher than the 12hr DTA count. As overall performance has collapsed, that ratio has fallen and is now between 3 and 4.


The analysis below shows the relationship at trust level between the 12hr after DTA metric and the 12hr from arrival metric. Note the variation across trusts and the fact that some trusts with a large number of 12hr from arrival waits have almost no 12hr from DTA waits.



The DTA metric is unreliable and should be replaced with the far more reliable 12hr from arrival metric.


Conclusions

There is a huge problem in how NHSE have tried to improve A&E performance and the metrics they have used are only a part of the problem. NHSE strategy was entirely focussed on the wrong causes of poor performance for a decade. And, even though the current UEC strategy (published in January 2023) admitted that mistake, NHSE still seem bereft of focus on the underlying operational problems causing poor performance. And their process improvement methods seem rickety, with little grip and few incentives to drive improvement.


But the whole process of driving improvement, even if it were effective, would be undermined by metrics that fail to correctly identify where performance is poor. Better metrics won’t fix the performance, but at least they could stop actively undermining the process.


[Added after original posting] PS One additional problem I forgot to mention in the first draft of this is that the current data is reported at trust, not site, level. Many trusts run multiple type 1 A&Es but there is no public data on the site-level performance despite many trusts having sites with very different performance. It would be good for both the public and the internal ability of the system to understand performance differences if all reporting was changed to be site, not trust, specific. The argument for not doing this is that trusts are the legally responsible body for performance. I'd say, screw the legal niceties: we need the better, more specific, data to get a grip on performance and to be honest with the public.





 

Tuesday 17 October 2023

NHS Digital’s Annual report on A&E performance is a mess and could be much more useful

The structure of the data as released and the interactive tool to visualise it are a mess of bad choices that get in the way of helping other analysts derive useful understanding from the data. They could do much better. 


Every year the vestigial stump of number crunchers in NHSE who still brand their work as NHS Digital produces a report on the previous year's A&E performance.


The good thing is that the latest report releases far more data than has been normal in the past. There are many new breakdowns and extra pieces of data summarising what happened. And there is some interactive data visualisation. Unfortunately the effort to visualise the data screwed the pooch by being worse than useless.


There are some problems with the data as released as well, but the big errors are in the choices made in visualising it. I could have some fun satirising the bad choices but, since I know that someone in NHSD reads at least some of the things I say, I want to provide some critical feedback on specific issues alongside some suggestions about how to do better in the hope that improvements can be made.


Some of the charts contain spectacularly bad choices of what to include


Take the original version of this chart which does contain some very interesting information:


It is important to know how many patients waited >4hr (it is the key performance target) and how many waited >12hr. It is good to know the total number of patients waiting in each category and also the proportion of each as a percentage of total attendance. But the percentages (by definition numbers between 0 and 1, or 0 and 100, depending on how they are presented) are plotted on the same scale as attendance (a scale from 0 to about 15m). So, by plotting them on the same scale, the designer has guaranteed we can't see the percentages. In effect, presenting the data this way makes it impossible to see the key statistic any user needs to see. (And don't get me started on the unreadable diagonal labels: the designer could have truncated the names to be readable or rotated the chart 90° to make them horizontal and easier to read.)
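For what it's worth, the fix is trivial in any charting tool. Here is a minimal matplotlib sketch (with illustrative numbers, not the NHSD data) that puts counts and percentages on separate axes so both remain visible:

```python
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter

# One way to avoid plotting percentages on a count scale: give counts and
# percentages separate y-axes. The trusts and numbers are illustrative only.
trusts = ["Trust A", "Trust B", "Trust C"]
attendances = [450_000, 380_000, 520_000]
pct_over_4hr = [0.42, 0.31, 0.55]

fig, ax_counts = plt.subplots(figsize=(8, 4))
ax_counts.bar(trusts, attendances, color="lightsteelblue", label="Attendances")
ax_counts.set_ylabel("Attendances")

ax_pct = ax_counts.twinx()                      # secondary axis for the percentages
ax_pct.plot(trusts, pct_over_4hr, "o-", color="firebrick", label="% waiting >4hr")
ax_pct.set_ylabel("% waiting >4hr")
ax_pct.set_ylim(0, 1)
ax_pct.yaxis.set_major_formatter(PercentFormatter(xmax=1))

fig.tight_layout()
plt.show()
```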


To be fair, they partly fixed this chart (possibly because I pointed out the absurdity on Twitter).


I quote the original version of this example because it illustrates the problem across many of the visualisations on the site. It is as if the manager who demanded the data be visualised gave no guidance as to what was important and allowed some PowerBI developer to just dump the data into whatever default dataviz PowerBI chose without any consideration of usability or relevance.


Doing this is a chronic waste of time for both the developer and the user. The result is to distract from the data rather than to highlight the important parts of it.


There is another, more subtle, problem with this chart: the plotted values are not, as the visualisation implies, separate. The total shown for the >12hr metric is also included in the >4hr metric. Were the metrics plotted in a simple table this might not be such a problem. But in a chart like this the visual implication is that they are independent. 


Yes, we need to know those metrics, but the underlying information has other useful data about the distribution of waits. Unfortunately the released data doesn't contain what is needed for the best way of showing the overall distribution of waits, which can provide insight into the nature of the problem in a hospital's processes.


To illustrate what could have been done here are some old charts displaying more complete data on the distribution of waits (which is present in the source ECDS data). These are based on some work first done around 2010 by some A&E experts to better understand the differences between hospitals, but which are also useful inside a single department to highlight certain common issues. 


The A&E Tsar in 2010 thought that a typology of the shape of charts like this could identify a range of common performance problems and do so early, so breaches of the major targets could be corrected before they happened.



This type of chart (in this case from a poor performer in 2012) illustrates how a simple comparison of the distribution of waiting times can provide some insight. The basic chart shows the number or proportion of patients waiting for different times. Each column is a 15min block of waiting times and shows the proportion who departed with a wait of that length. Patient types are shown separately to highlight the stark difference in waits for different categories of patient. In this case there is clearly a much bigger problem with admitted patients than for discharged patients. These patterns can signal the need to act even before the whole hospital breaches the 4hr target (what the shapes look like now that nobody meets the target would be fascinating and informative).
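For anyone wanting to build this sort of chart, here is a minimal sketch using simulated patient-level records standing in for ECDS data (the column names and distributions are assumptions, chosen only to illustrate the shape, not to match any real department):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Sketch of the waiting-time distribution chart: 15min blocks, with admitted and
# discharged patients shown separately. The records below are simulated stand-ins
# for patient-level ECDS data; the column names and distributions are assumptions.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "wait_minutes": np.concatenate([
        rng.gamma(4, 45, 5_000),     # discharged patients: mostly shorter waits
        rng.gamma(6, 70, 2_000),     # admitted patients: longer waits, many past 4hr
    ]),
    "disposal": ["discharged"] * 5_000 + ["admitted"] * 2_000,
})

bins = np.arange(0, 12 * 60 + 15, 15)             # 15min bins up to 12hr
fig, ax = plt.subplots(figsize=(9, 4))
for disposal, group in df.groupby("disposal"):
    counts, edges = np.histogram(group["wait_minutes"].clip(upper=12 * 60), bins=bins)
    ax.step(edges[:-1] / 60, counts / counts.sum(), where="post", label=disposal)

ax.axvline(4, linestyle="--", color="grey")        # the 4hr standard
ax.set_xlabel("Time in A&E (hours)")
ax.set_ylabel("Proportion of patients per 15min block")
ax.legend()
plt.show()
```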


Even the simple version of this chart (with no separation of patient categories) is still useful. Indeed, NHSD used to publish something similar for waiting times (if I remember correctly, they used 10min intervals, not the more natural 15min, and grouped all >4hr waits together) but, unless I missed it, this is not available in the latest release. But all the data needed to reproduce my version of those charts is present in the original ECDS source and providing this sort of analysis of waits would have been a great service to all A&E analysts.


But NHSD didn't release this sort of chart nor the aggregate data needed to build it for each trust. If they had it would have been a major benefit for all who care about the data but don't have access to the raw patient level ECDS records to recreate it themselves.


Other charts have the same type of problem


But, since NHSD are responding to criticism, let me try to make some more suggestions about how to do better with the other charts. 


So here is how the national level age structure of A&E attenders is presented. In this case there is also a chart where this data can be seen against the structure in a specific trust for comparison purposes. This is sometimes a useful comparison to make and it is certainly interesting to see the age structure of demand. So bonus points for knowing that this comparison is useful.




But there is a very basic problem with how the data is presented: the data isn't sorted by age (it seems to be sorted by the size of the attendance in each group, which is useless and arbitrary). 


And the chart sits beside another chart of the demographics at a selected trust. But, since the sorting on the age groups is by volume not age, the scale is sorted differently. This makes any visual comparison of national versus local age distribution impossible. Anyone needing to see the comparison would need to completely redo the charts themselves from the raw data.




And there is a commonly used way to display information like this: the population pyramid. Here is an example from a different dataset:


This is actually from a GP system and illustrates a familiar way to present population information. It has several features that can't be done with data from A&E systems as released by NHSD. The gender mix is useful to know (eg it is very clear here that more women use the service than men). The age bands are consistent sizes (5 years each) which makes interpreting the overall pattern easier. And they are sorted correctly by age which improves the consistency of the patterns. 


This is the sort of chart demographers almost universally use for exploring population structures. The ONS website has interactive versions for exploring the UK population structure.


But we can’t do this with the A&E data. We can't compare by gender as the A&E dataset contains gender but not gender linked to age bands. And the age bands in the A&E data are inconsistent sizes. Had the NHSD data been structured differently, a standard population pyramid would have been possible, but they seemed unaware of the utility of doing this and left age separated from gender in the data release.


But we could still do a sort of simple population pyramid from the A&E data as a visualisation. And one that makes comparisons across trusts easier. In the chart below the A&E data is plotted using only minor adjustments (age bands are grouped to a consistent 5 years and ordered properly).

Were the linked data on gender present in the released spreadsheets, this could easily be converted to a more normal pyramid containing both the age and sex of the attendance.
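As a sketch of what that looks like in code (with illustrative counts, not the NHSD figures), the key step is simply forcing the age bands into age order rather than volume order:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Sketch of a properly ordered age-band chart (illustrative counts, not NHSD data).
# With linked gender data this becomes a standard pyramid: plot one sex as negative
# counts on the same horizontal axis.
bands = ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39"]
attendances = [90_000, 60_000, 55_000, 70_000, 85_000, 80_000, 75_000, 70_000]

df = pd.DataFrame({"band": bands, "attendances": attendances})
# Keep the age bands in age order, never in volume order
df["band"] = pd.Categorical(df["band"], categories=bands, ordered=True)
df = df.sort_values("band")

fig, ax = plt.subplots(figsize=(6, 4))
ax.barh(df["band"].astype(str), df["attendances"], color="steelblue")
ax.set_xlabel("Attendances")
ax.set_ylabel("Age band")
plt.show()
```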


Another useful thing to know is where the volume is coming from. The data on referral source is available. The chart for this is below:


The biggest problem with this chart is the unreadable labels and the crowding which makes it hard to see all the data at once.


There is a better way to present this data that minimises those problems. The comparisons below use a treemap to visualise the data. This has some benefits: the visual comparison across trusts is accurate but, also, the biggest volume categories are the most visible, minimising the crowding of the chart by small or irrelevant categories with low volume. (In this case further work could highlight the major categories more effectively but this default chart already does a good job of highlighting major differences across trusts.)
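A minimal sketch of such a treemap is below. It assumes the third-party squarify package and uses illustrative referral-source categories and counts for a single trust, not the NHSD figures:

```python
import matplotlib.pyplot as plt
import squarify  # third-party treemap helper, assumed installed (pip install squarify)

# Sketch of a referral-source treemap for one trust. The categories and counts
# are illustrative, not taken from the NHSD release.
sources = {
    "Self-referral": 52_000,
    "999 / ambulance": 21_000,
    "GP referral": 14_000,
    "NHS 111": 9_000,
    "Other": 4_000,
}

fig, ax = plt.subplots(figsize=(7, 5))
squarify.plot(
    sizes=list(sources.values()),
    label=[f"{name}\n{count:,}" for name, count in sources.items()],
    ax=ax,
    pad=True,
)
ax.axis("off")   # a treemap needs no axes
plt.show()
```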



If the chart were interactive, the unreadable labels in small boxes in the treemaps could show pop-ups identifying the group (as the interactive version of the above chart does). This would minimise the loss of information because of unreadable labels.




One default chart in the Attendances section of the dataviz tool hints that the data contains a number of different classifications of attendance. But the chart is an abomination for multiple reasons. There are far too many categories and almost none of the labels are readable. But, worse, there are at least six different categories here which are unrelated to each other (e.g. deprivation, gender, ethnicity…). Of course, totalling things across independent categories gives a stupid, meaningless total (confusingly labelled "attendances") that visually dominates the chart. 


The very least that should have been done to make this useful would have been to group the various metrics together to show only related items (eg deprivation status, attendance source, discharge destination, ethnicity). As it is the chart does more to obscure the rich data available than it does to visualise anything useful.


Some other charts in the pack do attempt the job of showing just one category from this data and allowing comparisons across trusts. For example there is an interactive chart that allows comparisons across trusts and to national data for the deprivation status of the attenders.


The national chart doesn't look too bad apart from basic readability and ugliness of the labels:



But it sits beside a chart intended to allow a comparison across multiple trusts and to the national pattern, which looks like this:


The idea of providing comparisons is good. But to make the national comparisons, the order of the deprivation categories needs to be the same. And, while a diligent user could do a comparison among several hospitals, the overall pattern of their attenders by deprivation status is obscured because the categories have been scrambled somewhat randomly.


Here is an example from the actual data that shows what could have been done to make this more useful and coherent:


Here the comparison across hospitals is easier. A simple colour code has been added to make visually scanning the labels unnecessary (more deprived gets deeper red, less deprived gets deeper blue). And the scale always runs in order from least to most deprived. The stark differences across the chosen hospitals in the deprivation mixes of their populations are very clear.
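Here is a minimal sketch of that approach: deciles kept in a fixed order with a shared red-to-blue colour code across panels. The hospital names and proportions are illustrative, not real data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the deprivation comparison: IMD deciles always in the same order, with a
# fixed red-to-blue colour code shared by every panel. The proportions are illustrative.
deciles = [f"IMD {i}" for i in range(1, 11)]                  # 1 = most deprived
colours = plt.cm.RdBu(np.linspace(0.05, 0.95, 10))            # deep red -> deep blue

hospitals = {
    "Inner-city trust": [0.22, 0.18, 0.14, 0.11, 0.09, 0.08, 0.06, 0.05, 0.04, 0.03],
    "Shire trust":      [0.04, 0.05, 0.07, 0.09, 0.10, 0.11, 0.12, 0.13, 0.14, 0.15],
}

fig, axes = plt.subplots(1, len(hospitals), figsize=(10, 4), sharey=True)
for ax, (name, shares) in zip(axes, hospitals.items()):
    ax.bar(deciles, shares, color=colours)
    ax.set_title(name)
    ax.tick_params(axis="x", rotation=90)
axes[0].set_ylabel("Share of attendances")
plt.tight_layout()
plt.show()
```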



I could give even more examples but the ones above give a good range of illustrations of the very general issues in the NHSD visualisation tool. None of the charts have been customised in any way to make the dataviz more useful. Many end up so messy they make it impossible to make sense of the underlying data.


What general lessons can be drawn?


A general lesson for both visualisation and data structuring is very clear: it helps to know why and how the data will be used and what the best way to visualise it is.


But there are also too many examples where the structure of the data does not facilitate the sort of useful analysis that the majority of potential users might desire. For example, those who would like to analyse demographic data for linked age and gender as a standard population pyramid can’t do so as the data structure provides only gender totals but not the breakdown by age and gender. 


And, for those who might want to analyse the relationship between diagnosis, investigation and treatment, they will find this can’t be done as, again, the three categories are not linked. Moreover, while the diagnosis and treatment data has ~1k distinct codes (which can be useful for researchers wanting to understand the mix of problems and treatments), the move to SNOMED CT coding seems to make it harder to group things together to get an overview (pre-ECDS, procedures were coded with OPCS codes and diagnoses with ICD-10 codes, both hierarchical systems, which makes grouping related things into broad categories for an overview much easier). SNOMED might allow this but the documentation of the coding is a mess and the links to the full data dictionary are often broken in the full documentation.


This may be an unfair criticism as many researchers will use the full patient-level records where many of these problems don’t exist, but what is the point of providing an annual summary with lots of detail but no way to link it across different categories or summarise it with existing hierarchies?


And providing a tool for data visualisation is a good way to enable users to navigate the rich data. But not if the structure of the data has been ignored in choosing the method of visualising it. And most of the choices of how to present the data are extremely bad.


The dataviz tool looks like it was created by dropping data tables into whatever the default PowerBI choice of chart is (every chart seems to be a vertical bar chart whatever the data looks like). For some charts this is fine, but even when this is a good choice of chart type, the details have often been mangled. There is no point in presenting breakdowns by age if the age categories are not in order, for example.


There is also vital information missing. Some data, as I demonstrated above, is extraordinarily useful for understanding performance. The shape of the distribution of waiting times is a good example. The data release could have presented the distribution of waiting times in detail. Instead of that we get some simple statistical summaries (median waiting time, mean waiting time and the 95th percentile). These tell us something, but showing the full distribution (in 15min or whole-hour groups) would have been far more useful.


Final thoughts

This is a missed opportunity by NHSD. They wanted to release far more detail than usual in the report. But they have chosen poorly which details to present and have failed to consider how users might want to use the data.


And in choosing to offer an interactive visualisation they pay lip service to a good idea. But, by failing to pay the minimal attention to the basic principles of good visualisation, they have created an epic fail which obscures rather than enables a better understanding of the data. It might be used in future as a cardinal example of how to do dataviz badly.


It is fixable, though. And they have already changed some of the most egregious examples on the basis of public criticism. But the whole approach needs a complete rethink taking into account what reasonable users might want to do with the data.


There are still some people around who know which parts of the data are important and how to visualise them (some in think tanks, some in the better CSUs and some independents). Heck, I’ve been analysing and visualising A&E data for over 20 years. 


NHSD could usefully consult with some of the experts to rethink the whole release and the dataviz based on it to make it far more useful.





Monday 3 October 2022

The NHS is a microcosm of the British economy




Mistakes in how the government has managed the NHS parallel the mistakes in managing the economy. Attempts to hold down the government budget are constantly made by taking easy choices rather than the right choices. The same is true in the NHS, where the capital budget is raided to cover operating deficits. Both are recipes for long term decline.



All governments would like to see a higher growth rate in the economy. The current one wants to increase incentives with tax cuts but needs to pay for those giveaways with spending cuts. But, faced with those spending challenges, they often take the easy road to keep the budget in some sort of balance by cutting the very capital projects that might improve growth in the long term.


The parallel with the NHS is interesting. Growth in spending seems relentless. That growth can be constrained only by improving productivity. But the choices made to keep the budget under some semblance of control hurt productivity, making tomorrow's problems worse. In this way the NHS is like a microcosm of the whole economy, at least in the ways both have been managed in the last decade or two.


The economy

The link is explained by the factors known to affect productivity in the economy and the NHS.


As Sunak explained in his spring statement while he was still chancellor (my highlighting):


"Over the last fifty years, innovation drove around half the UK’s productivity growth.


…our lower rate of innovation explains almost all our productivity gap with the United States.


Right now, we know that the amount businesses spend on R&D as a percentage of GDP is less than half the OECD average.



Weak private sector investment is a longstanding cause of our productivity gap internationally:


Capital investment by UK businesses is considerably lower than the OECD average of 14%.


And it accounts for fully half our productivity gap with France and Germany."


His analysis is mainstream economics. But it is worth asking what governments have actually done about either innovation or capital spending over the last decade or two because the same factors matter not just in the private sector but in the parts of the economy controlled by the government.


This chart on total government spending appeared recently in the FT: 



The point is that, when faced with alternative ways to control total government spending, Osborne chose the easy path of cutting capital spending, not current spending. And spending on national infrastructure is the sort of thing that leads to long term improvement in productivity (and there is a direct influence on the economics of private capital spending because the future returns on that will be higher if the national infrastructure is better).


But, politically, capital is easier to cut. Who notices the long term impact of projects that might not finish for years and might only show big benefits in decades? Everyone can see this year's budget deficit. The temptation is to take the easy option even though it is the worse option for productivity and growth in the long term. Yes, all politicians, if asked, would claim they want higher productivity and growth: but they are very reluctant to face worse headlines tomorrow about the budget deficit.


Given that UK productivity growth tanked during the Osborne austerity period, you might think this lesson had been learned. But that is not what the mood music emerging from Whitehall suggests: in response to the catastrophic reception of the Kwarteng mini-budget, departments are being asked to make sharp cuts with capital spending at the top of the list.


The NHS

How governments have managed the NHS is a microcosm of this same problem. And it has been catastrophic for the long term health of the system.


If tomorrow's NHS is to be less of a financial burden on future governments, it needs to be much more productive (however that is defined: quality and throughput both matter in healthcare). The same factors, innovation and capital, have big influences on future NHS productivity. But how has the budget been allocated in the last decade or two?


We can compare the NHS to other health systems in how it allocates money to the things that should matter to future productivity. The easiest to measure is capital spending. And, mirroring the problem with spending in the economy as a whole, the big picture looks to be a catastrophe of poor short term choices (for a more detailed analysis see my longer rant here). In an analysis in 2019, the Health Foundation produced this chart:



And said:


"Capital spending is a critical input in health care, with new technology able to transform services and improve workforce productivity. 


The DHSC has proposed a more technology- and data-driven NHS. New technology and IT could improve patient services and increase productivity, but both currently make up a small proportion of capital spending."

 

So, not only does the NHS get starved of capital spending in general but the mix is very light on the things that would typically have the biggest impact on productivity.


The result of this is that the capital employed per worker (an interesting measure of the stock of things that partly determine productivity) is half that of most comparable systems. 


And, according to the National Audit Office, even when the NHS gets allocated a capital budget, it frequently either underspends it or pilfers it in year to cover operating deficits. This is a perfect illustration of the political choice to take an easy path rather than the right one. And one that has, in effect, killed the hope that NHS productivity could improve enough to lower the financial burden on long term government spending. And this has been the chosen path for two decades. It is little wonder that the productivity of the NHS is falling and that the system is creaking under the strain. 


Some conservative commentators are now arguing that the government can no longer afford to keep spending more, as they need to do to stop the wheels from coming off the bus. But those commentators ignore a major reason for the current need for more spending: the neglect of any attempt to spend the money on the long term things that would make the NHS much more productive and reduce the pressure to spend more to avoid imminent catastrophe.


And the opposition don't help pull the debate back to solid ground by claiming everything is about staff shortages. There are two problems with this. One is that investment in better equipment and facilities could improve productivity so much that the need for more staff could be reduced. The other is that the biggest reason staffing is a problem is not recruitment, it is retention, and a large part of that is caused by the poor working environment, some of which is caused by the lack of capital per worker. And the constant churn of staff, especially when experienced staff are replaced by cheaper but less capable staff, undermines team productivity and quality, exacerbating the need for yet more staff in some sort of anti-productivity death spiral.


So what?

And this brings us back to why the NHS is a microcosm of the economy as a whole. In order to attempt a rescue of government finances ravaged by the Kwarteng mini-budget, the key proposals to recover the government deficit currently being discussed are to cut things that are easy to cut quickly. Like capital spending. So, instead of spending on the long term things that enhance future productivity, they are likely to cut them further and in ways that damage the very growth they seek. They should have learned from the Osborne era that this does not work. The easy path then, capital austerity, hurt the national growth rate and made it harder to fund the sorts of spending the government cannot cut if they don't want to lose their core voters (are they going to cut pensions when the most conservative bloc of voters are pensioners? I don't think so).


As Martin Wolf said in a recent column in the FT (my highlights):


The UK’s longer-term economic performance must indeed improve if the desires of its people for a better life are to be realised. If the government wants to do something useful about this, it might dust off the report of the London School of Economics’ Growth Commission of 2017. Better incentives are indeed a part of the answer, but only a part. This is why systematic tax reform would be desirable. There must also be difficult deregulation, notably of land use. The state must supply first-class public goods, in the understanding that these are a social benefit, not a cost. There must be fiscal and monetary stability. There must be far higher investment in physical and human capital, both public and private.


Neither the economy nor the NHS will be better tomorrow if the investment in the long term is cut. The persistent habit of picking easy cuts rather than the right cuts is a recipe for long term catastrophe (and possibly short term catastrophe too). 


Spending the money well (especially not neglecting long term investment) is the solution to the growth and productivity problem in the NHS and the wider economy. Spending it badly by making easy choices now is not.


PS That cartoon is modified from an original by the late great B Kliban. See some of his other quirky cartoons here: https://www.gocomics.com/kliban