Sunday, 28 June 2015

Governments need to get better at recognising when they are wrong

Governments and their civil service advisors need to learn how to admit failure. If they could, their policies would be better and their big capital investment programmes might be more successful. I’m not optimistic.

Apple is sometimes thought of as the firm that can do no wrong. Their current position as the most valuable quoted firm on the US stock market is often ascribed to their uncanny ability to create exactly the product consumers really, really want.

This view is wrong and the reason why contains a valuable lesson for governments and their civil service advisors.

I’ll stick with the history of their most successful product to make my point.

When the iPhone was launched I was something of a skeptic about whether it would be successful. I’m a big fan of the legendary Don Norman (famed for his work on design, such as The Psychology of Everyday Things). Norman makes a powerful argument that generalist devices do a worse job of all their tasks than specialist devices. So a computer that tries to be a phone looked like it would be both a bad phone and a bad computer.

Norman was wrong (at least about the iPhone) and so was I.

So, when Apple launched the first iPhone, I didn’t want one, especially at the price. I did eventually buy one but only when the UK suppliers were clearing out their original stock to make way for the iPhone 3G. I think I paid £150, probably 30% of the original launch price. I wasn’t just a skeptic on the features, I really didn’t want to pay their OTT asking price.

The public agreed with me on the price. As, eventually, did Apple, who lopped a third off the original price and, if I remember rightly, offered refunds to some original purchasers to assuage their anger that the price had been dropped.

The important thing is that Apple learned from their error on pricing and, as far as I know, have never had to clear out significant volumes of obsolete stock for any later iPhone model.

Apple has made a whole series of about-turns on what looked like set-in-stone features of the iPhone. All this despite the famously tyrannical and opinionated views of Steve Jobs. Originally they were not going to have native apps (it was all going to be WebApps or something). They changed their mind. Originally the form factor was the perfect size for the hand and wasn’t going to change (they based this on solid original research). But they have changed that twice, despite Steve Jobs declaring the original screen size to be perfect.

The point of these changes is that Apple knows how to learn. They don’t have a magical ability to get things right but they really know how to adapt. And they do that quickly. They admit their errors and change. Even while the famously stubborn Jobs was still alive and in charge, they didn’t just stick with what he originally thought of if the evidence said it wasn’t working.

Governments need to learn this skill.

There are two reinforcing pressures that prevent governments from learning. One is the nature of political promises. Political parties base their manifesto promises on the assumption that they know the answers to problems. And the civil servants who advise them when they enter government are promoted and rewarded not for solving problems but for not being seen to fail.

The consequences of failing to admit failure are large. Many (perhaps most) real world problems require a degree of experimentation. Even with vast effort and the best available research, the correct solution to a problem is often far from obvious. Apple may (in retrospect) have radically changed the world of mobile phones but they only did so after several stumbles, and they did so because they were willing to admit their mistakes and change direction.

Private firms have some external discipline to help them. If they continue to resist learning, they will eventually run out of customers and money, limiting the scale of their errors. It isn’t that they make fewer mistakes than governments, but that discipline keeps the scale smaller. Nokia and Blackberry (or RIM as the firm was once known) thought their technical superiority would help them retain market share in mobile phones. They were wrong. But their error didn’t stop the public buying phones; we just buy them from Samsung and Apple. Apple once thought they would change the world of handheld organisers with the Newton, but they had to stop making them before the losses bankrupted the whole firm.

Governments don’t have such external discipline. As a result their mistakes last longer and are bigger in scale. John Kay once described Britain’s attempt to build a new technology for nuclear power generation (the Advanced Gas-cooled Reactor, or AGR) as the worst public investment in the history of government. The promise was that the UK would have a world-beating new technology, invented here and under our control. This could be sold to others and would be a showcase of British technological expertise (unfortunately this came true, as the programme was a showcase of how bad Britain’s government is at developing and exploiting new technology). The programme ended having spent perhaps £100bn on a technology that didn’t really work, dwarfing by a factor of 10 the joint French-British investment in the supersonic vanity project Concorde.

The AGR programme wasted so much because there was no point at which any advisor or any minister wanted to admit it was a failure. It is a perfect case study in the sunk-cost fallacy.

Sadly governments are prone to making mistakes that are bigger than they should be because they can’t admit they are wrong. Examples abound: the National Programme for IT in the NHS; the Post Office basic bank account and benefits system; the Crown Prosecution Service’s case tracking system; the Department for Communities and Local Government’s FiReControl project to reorganise emergency control rooms and systems. Many of these went wrong for multiple reasons (as documented in The Blunders of Our Governments) but an inability to admit errors made them bigger and more damaging.

Policies that don’t have much of a sunk cost also suffer from this delusional assumption of omnicompetence. Tim Harford argues in his book Adapt that, in a complicated world, the only effective way to know what works is to experiment. But this isn’t easy in government. How many manifestos say: we will reform education by trying several different ways of teaching reading and adopt the one that gives the best results? Or: we will try several experiments to test which interventions are most effective at helping people out of poverty?

But even when experiments are sanctioned by government, the motivation of their advisors may undermine their value. The problem is that proper experiments inevitably generate failures. We try several ideas and some work much better than others (if we don’t test a variety we cannot know which is best). So some of them will fail. This admission—that some interventions work better than others—is an essential part of learning. But admitting failure is not in the DNA of most civil servants (at least in the UK). When they do experiments they like to set them up so they can’t fail. Or they do their utmost to avoid admitting they have failed. Either approach utterly inhibits the ability to learn and therefore the ability to improve.

Governments have to get better at this. The world is too complex to be dominated by ideology or by those who think they know the answer before they have tested whether it works.

Friday, 19 June 2015

The NHS is clueless about how to collect and use data

The NHS is deeply, fundamentally clueless when it comes to data. Data should be the single most important resource that helps the system meet the £20 billion target for improving productivity and quality. But nobody anywhere in a position of authority seems to have the slightest clue how the single biggest public dataset about healthcare could make a contribution to that challenge. And a whole pool-pah (it’s a Bokononist word, look it up) of problems flows from that cluelessness.


There is more than enough public argument about whether the controversial care.data programme is a good thing or not. The programme planned to join up GPs’ datasets about their patients with hospital datasets about the same people. The benefits to clinical research were supposed to be enormous (and probably are enormous). But controversy erupted because the issue of patient consent was treated casually and many campaigners don’t think patient data should be used for things like research without explicit consent from the patients.


But there is something missing from the debate that neither the advocates of care.data nor the privacy activists seem to have realised.  What is important is not the secondary uses of data (which means things like clinical research and drug development or other things which exploit collective data but don’t provide a benefit directly to the individual patient) but the primary uses of the data (which are relevant to the immediate care of the patient or the immediate operations of the NHS). And when it comes to primary use of the data the NHS is like a blind man in a dark room searching for a black cat that isn’t there. And, since informed consent is only valid if the patient understands the implications of their decision, most patients will not be making informed decisions whatever they decide.


Instead of talking about the primary uses of data, the entire debate has focussed on sexy, shiny research uses (like a magpie with an obsessive compulsion for shiny objects). The positive stories mix buzzwords like Big Data, Genomics, Personalised Medicine, Predictive Analytics, Molecular Diagnostics and Graphene-enabled Neurophysiology Enhancers (there’s a story here with all of those buzzwords except the one I completely made up). The public remain unconvinced, perhaps because these benefits seem a long way away from their GP appointment next week, or because the gains will mostly flow to rich private organisations like drug companies, who the public already think make an obscene superfluity of profits.


But the worst effect of this focus on sexy long term secondary stuff is that the boring prosaic stuff is totally neglected despite its importance to the individual patient or the management who have to run the NHS tomorrow. And the NHS is not just ignoring the uses of the data, it is failing to think about how to collect and handle the data in ways that would make it easier and cheaper to collect and more useful to those who need to work with it.


Here is a strange observation about what the public thinks. When asked whether they are happy with their health records being shared with evil private sector capitalists (like insurance or drug firms) people are often a little wary. But when asked whether they think the NHS should share their GP records with, say, A&E departments they are not only happy about the idea, they assume it already happens. If an A&E department gives you a shot of penicillin and it kills you because of an allergy your GP knows about, that’s bad. And avoidable. And a good reason to share information across the NHS. And most people assume that sharing is routine. Most people are wrong. The majority of A&E departments could not access that essential information from your GP records at the point where it is most needed. In fact many hospitals would struggle to share that information internally among their staff even if you made a point of telling the admitting medic your medical history accurately when admitted (and many people are admitted in a confused state where their own recollection of their medical history or current prescriptions is not that accurate to start with beyond the knowledge that some of the pills they take are blue and others are orange).


The public, despite the fact that too many of them read the Daily Mail, are actually fairly sophisticated in judging the benefits of joined-up data versus the tradeoffs to their confidentiality. And they seem to accept the tradeoff.


The NHS doesn’t even seem to understand the problem. The people concerned with managing NHS data have not told the public any stories about these immediate and important benefits. Nor has the NHS collectively sought to manage its data to maximize the gains. This failure has a corrosive and debilitating effect on the quality of care and the NHS’s ability to improve that care.


Paul Baumann, the current finance director of NHS England, has monthly reports on the financial state of the NHS. So a couple of months into the year he might know that there is a worrying financial overspend in hospitals. But he will be clueless as to why, because the data about what activity is happening is not available for perhaps another three months, and when he gets it, it will be a lot less reliable than the data about the money.


Let’s imagine for a moment there is a short term public health catastrophe like a sudden outbreak of zombies in Manchester. We could nip the outbreak in the bud and prevent it spreading by issuing AA-12 automatic shotguns and plenty of ammunition to the admitting doctors in Manchester’s A&E departments. They could neutralise the outbreak by shooting the zombies in the head before they bit any of the other patients in the A&E queue and propagated the infection (I’m assuming fit people can run fast enough to avoid them). The NHS could order precautionary supplies of shotguns for all the A&Es surrounding the Manchester area in case the immediate response didn’t work.


The real NHS would not notice the outbreak centrally for a good few months (or years, if it had to wait for the publication of academic analysis in the British Journal of Zombie Medicine) by which time the zombies would have organised themselves into a political party strong enough to stand for parliament and pass legislation to ban shotgun sales to doctors.


OK, I’m using the zombie example as a humorous aside, but the NHS faces real problems where the same issue applies. Every winter, for example, it faces an A&E crisis. Waits in A&E extend to headline-making levels. And the system moves into crisis mode in a paroxysm of effort to address the problem. But it doesn’t know what the problem is. It can tell the problem exists, as performance is reported weekly, but it can’t tell why. Is it because patients are sicker in the winter? The NHS doesn’t know, because that data is only reported months after the performance data and isn’t available to most of the people who need it.

The NHS should already be able to tell that the winter crisis isn’t caused by more people turning up (winter is quieter than summer) because attendance is reported alongside performance. But the NHS lacks the analytical capacity, or the will, to do that simple analysis and thereby avoid vast amounts of spending on solutions that won’t work. And there is no prospect at all of really getting a grip on the problem, because the detailed data about why things are slower in the winter flows slowly and unreliably months after the crisis, and can only be analysed by a select few who are never the operational managers inside the system who might be motivated to fix the problem (like, before the zombies get them or they get sacked because their performance is so poor).
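The simple analysis described above needs nothing sophisticated. A sketch, using invented and purely illustrative weekly figures (not real NHS data), shows how little work it takes once attendances and performance sit in the same dataset:

```python
# Illustrative only: invented weekly figures, not real NHS data.
# The point: "is winter busier than summer?" is a trivial question
# to answer once attendance and performance are reported together.

weeks = [
    # (week, attendances, pct_seen_within_4_hours)
    ("2014-W28", 28_500, 95.1),  # summer
    ("2014-W32", 29_100, 94.8),  # summer
    ("2014-W50", 25_400, 88.2),  # winter
    ("2015-W02", 24_900, 86.5),  # winter
]

def avg(values):
    return sum(values) / len(values)

summer = [w for w in weeks if w[0] in ("2014-W28", "2014-W32")]
winter = [w for w in weeks if w[0] in ("2014-W50", "2015-W02")]

# Winter is quieter but slower: volume alone cannot be the explanation.
summer_busier = avg([w[1] for w in summer]) > avg([w[1] for w in winter])
summer_faster = avg([w[2] for w in summer]) > avg([w[2] for w in winter])
print(summer_busier, summer_faster)  # True True
```

With these (made-up) numbers the busy season is also the fast season, which is exactly the kind of cheap negative result that should stop money being spent on "reduce attendances" solutions.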


Other organisations don’t deal with their critical information this way. A savvy fashion retailer, for example, will know that the sexy red dress it launched last week is selling like AA-12 shotguns in a zombie outbreak. As a result it can tell its manufacturer to triple the production run. Next week, instead of angry scenes where shoppers fight each other in the aisles for the dwindling remaining stock, it will make obscene profits from selling vast quantities of the freshly made dresses. It can do this because its key information systems are geared to recording the right information and its supply chain is geared to responding to the resulting analysis.


The NHS, despite being concerned with much more important things than fashion (like life and death), doesn't bother setting up its information collection or analysis to be able to do this. Doctors apparently have far more important things to do than working out whether the treatments they issued last week are killing people. Instead of collecting the right data, sharing it and analysing it so it does a better job of delivering high quality care next week than it did this week, the NHS carries on with unreliable, paper-based recording of the wrong, poor quality data and then doesn't bother to analyse it for any purpose at all for months, if not years. Then it brings in new rules to minimise the possibility of any evil drug company using NHS data for commercial gain, which has the side effect that nobody else (even the independent researchers whose motives are pure) can assess whether that company's existing products are killing people or curing them. And the NHS can happily reassure patients that their data cannot be accidentally leaked in ways that breach their confidentiality (while forgetting to mention that it can't be used to save their life or improve their care either).


OK, I’ve drifted into a rant. But there is a serious point here. The NHS doesn’t seem to have much of a clue about how patient data could be used to run tomorrow’s NHS more efficiently than today’s, or how to make the quality of tomorrow’s care better than today’s. As a result it doesn’t organise the data it collects in ways that minimise the delay in getting it or maximise its quality. And it makes the data hard to share for any use, not just the distant secondary uses that worry patients.


There is a £20bn gap between the likely future budget for the NHS and the expected spending needed to keep the system functioning at current standards. The best way to bridge that gap is to use patient data routinely to improve the quality and efficiency of care.


This can’t happen while the NHS neither understands what those primary uses of data are nor invests in collecting or analysing the data.



Thursday, 11 June 2015

The blurred line between service improvement and direct care in the NHS

A recent story in the i-don't-care-about-the-facts-as-long-as-the-headline-attracts-attention online version of the Daily Mail claimed the NHS was about to violate patient confidentiality in the name of cost-cutting and rationing.

A flavour of the content:
NHS bosses are to trawl medical records of tens of thousands of patients to find out who is costing them the most money.
They will identify which individuals frequently see their GP, go to A&E or are on lots of prescription drugs with a view to ‘reviewing’ their care, and trimming their budget...

But concerns have been raised that the information will be used to drive down costs by rationing certain treatments or urging GPs not to refer patients to hospital. There are also worries that bosses will urge expensive patients such as the elderly to buy-in extra home-help or even move into a care home.

It has striking similarities to a national data harvesting project which has been put on hold after concerns were raised that sensitive details would fall into the wrong hands.

The NHS’s Care.data scheme was meant to begin last spring with information from millions of patients due to be uploaded so it could be analysed by experts to look at trends.
The story originated in the primary care journal Pulse, which took a slightly less tabloid tone but argued much the same points.

One thread of comment on the story on Twitter ended up in an ill-tempered argument with one of the guys from MedConfidential (the antagonists were both a little ranty, to be fair).


The issue worth expanding on at a length unavailable on Twitter is what counts as secondary use of data and what counts as data for direct care. The Twitter thread ranged over many issues, but care.data came up a lot and patient consent for secondary uses was a major topic.

Phil Booth, I think, believes that there is a very clear dividing line between data used for direct care and data used for secondary purposes. I don't, and I think the Southend story (the CCG scheme described in the Pulse and Mail pieces) is a good illustration of this (though I have only read the newspaper stories so I can't be 100% sure).

Before I get to the meat of the argument, though, I should say that I actually agree with MedConfidential that patient consent is important and the system should not casually ignore patient wishes. I disagree that the risks of sharing to patient confidentiality are high or that the harms that could arise are large, at least when careful controls are applied to the use of the data. And consent is supposed to be informed, which is hard to achieve when so much abject nonsense is talked about the potential risks and harms of systems like care.data. All data sharing involves some tradeoff of risks versus benefits, and the current NHS leadership has done a remarkably bad job of explaining either side of this.

The big problem Phil worries about, I think, is that while patients mostly don't mind the system holding records for the purpose of treating them, they sometimes object to the use of their data for research or commercial purposes. For example, it is possible to use pseudonymised data to test whether overprescribing of anti-ulcer drugs puts more people in hospital from heart attacks (this is a real research story from PLOS ONE). This is relatively unobjectionable. Or, commercial firms could use similar datasets to produce more accurate insurance premia based on your postcode. This worries a lot more people. Some patients don't want their data extracted for either of these purposes, perhaps due to the perceived risks to confidentiality. The important point is that both of these uses of data are clearly secondary: the data isn't being used directly for the treatment of an individual patient.

But the Southend case is a lot blurrier. The local CCG wants to identify patients with frequent visits to A&E or large numbers of prescriptions and so on. Phil argues that this is a secondary use of the data and that many patients might object. The tabloid bullshit spin says it is all about rationing and other evil purposes, putting it on a par with torturing kittens or something.

But consider this. Many of the frequent fliers (a somewhat derogatory term for patients who turn up a lot) are using NHS services frequently because something has gone wrong with their care. Asthmatics, for example, sometimes end up in A&E because they have acute exacerbations of the disease and can't breathe properly. But the reason is frequently because they are not properly trained to use their medication. Some don't use their inhalers properly and some are just on the wrong mix of drugs. The issue about identifying those people is not so we can ration their care; it is so we can improve their care. If we give them the right care in the first place they won't need to come to A&E so often: they benefit and the NHS benefits.

We can identify these patients by extracting their GP data and joining it up with hospital data. When we know who they are we can do something about it and improve the care they get. The question is: does this count as a use of data for direct care, or is it a secondary use? Phil Booth seems to be worried that the use is secondary and that patients might well object to the data being shared for this purpose. I think that this case shows that there is no clear dividing line between the two. (I hope we would both agree that the joined-up data should be handled very carefully so accidental breaches of confidentiality do not occur).
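The join itself is simple to picture. A sketch follows, with every patient ID, condition and threshold invented for illustration (this is not how any real CCG system is built): pseudonymised GP records matched against A&E attendance counts, flagging patients whose usage suggests their routine care is failing.

```python
from collections import Counter

# Hypothetical, hand-made records using pseudonymised patient IDs.
# GP data: one diagnosed condition per patient.
gp_records = {"p01": "asthma", "p02": "diabetes", "p03": "asthma"}

# Hospital data: one entry per A&E attendance.
ae_attendances = ["p01", "p01", "p01", "p01", "p02", "p03"]

# Join the two datasets and flag patients whose attendance count
# suggests their routine care (e.g. inhaler technique) needs review.
THRESHOLD = 3  # illustrative cut-off, not a clinical standard
frequent = {
    pid: (gp_records.get(pid, "unknown"), visits)
    for pid, visits in Counter(ae_attendances).items()
    if visits >= THRESHOLD
}
print(frequent)  # {'p01': ('asthma', 4)}
```

The output is a short list of people to review, not a rationing tool: patient p01 is an asthmatic attending A&E four times, which is precisely the pattern that suggests their medication or training needs fixing.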

The use of patterns in data to identify problems with care seems to me to be something few could object to. It clearly benefits individual patients and the care they get, despite the grotesque Daily Mail jeremiads. But this use of data sits somewhere between a clear use for direct care (your doctor looks at your past prescribing records before issuing a new prescription) and a use for secondary research (we look at millions of records to test for an excess number of heart attacks in people taking an ulcer drug).

To me (and I'm a data scientist so perhaps I'm biased) the use of data to improve the NHS is important. It might even be the most important way to meet the efficiency targets in the Five Year Forward View. But these benefits form a continuous spectrum from the direct to the secondary, and pretending there is a clear dividing line just makes achieving the benefits harder. If we explained these benefits more carefully we might be able to have a more sensible conversation about whether patients should consent to their data being shared.

The Southend experiment isn't (I hope) some Orwellian plot to abuse confidential records but a sensible approach to doing a better job of care for the patients who need it most. Besides, despite what Pulse claims, plenty of other health economies are already doing the same thing (as far as I know).

I'm mystified how anyone thinks we can run the NHS effectively at all or keep improving it if we don't use data this way. 

Tuesday, 9 June 2015

Paranoia about NHS data sharing is not grounded in reality

A recent Guardian story has caused yet another kerfuffle about data privacy and the care.data programme. The story has been widely repeated elsewhere, stoking paranoia about the incompetent government's inability to keep our medical records private even when we object to the sharing of those records with third parties. But the story is inaccurate, paranoid bollocks, as any journalist with the wit to read the original source should have realised.

The original Grauniad story was headlined:
NHS details released against patients' wishes, admits data body
and claimed:
The body responsible for releasing NHS patient data to organisations has admitted information about patients has been shared against their wishes, it has emerged. Requests by up to 700,000 patients for details from their records not to be passed on, registered during preparations for the creation of a giant medical database, have not been met.

Many other media outlets repeated the same story, often just referencing the Guardian. ArsTechnica, for example, told it like this:

The Health and Social Care Information Centre (HSCIC) has admitted to MPs that the medical details of 700,000 patients could have been shared to organisations and companies, despite the fact that those patients opted out from NHS England's medical database Care.data.
Their source was the Guardian. They proceeded to traduce the competence of the HSCIC and stoke more fear that the system is incapable of handling data confidentially. They also repeated the accusation that the data had been sold to insurance companies though the Guardian subsequently redacted this claim from the online article.

The problem is that the author of the story was too keen on a shock headline to bother reading the actual evidence presented by Kingsley Manning (the HSCIC's chair) to the Health Select Committee. The crucial paragraph in the actual letter says:

In February 2014 the Care.Data Programme was 'paused' and since that date no data extraction from GP systems has been undertaken as part of that programme. In the absence of any such extraction, the HSCIC had no information from the Programme, either on the objection preference of any individual who has registered a Type 1 or Type 2 objection with their GP, or the number of individuals who have done so.
The rest of the clarification points out that the real problem isn't about data being released when patients have objected (care.data hasn't released any data; I'll repeat that in capitals for other journalists who assume the Guardian does fact checking: NO DATA HAS BEEN RELEASED). The problem is that the way objections were originally going to be processed would result in patient data not being shared with anyone, even things like national screening programmes, which need to contact patients to tell them they are due for screening. In other words, it is hard to differentiate your objection to sharing your data with evil private capitalists or researchers from the need to share your data with beneficent public sector doctors who need it to give you better care. And it is a future problem, not a current one, as the HSCIC hasn't started the programme properly yet.

My purpose here is not to pretend that the HSCIC is totally competent and whiter than white. They are not. The care.data programme has not been handled well and their administrative processes are a mess. To be fair to them, this is mostly because they are dreadfully underfunded because, after all, why would we need high quality data to work out which drugs work, or which surgeons don't kill too many patients, or which ways of running the NHS lead to higher quality care: we can rely on trading clichés via newspaper headlines for that. And newspaper headlines don't cost the government money, unlike high quality secure data.

But it is worth pointing out that, for all its faults, the HSCIC has never harmed patients by releasing their data to people who shouldn't have it. The Partridge Report on the shoddy administration of HSCIC data releases couldn't actually find any examples of harm. Yet the HSCIC takes the flak even for things it didn't do (GPs and hospitals leak your identifiable personal data fairly regularly, in ways which are sometimes harmful, yet little attention is devoted to that serious problem in comparison to the acres of newsprint devoted to care.data. And the Health Select Committee tends to blame the HSCIC for leaks caused by GPs and hospitals even though they clearly aren't its fault).

The HSCIC needs to do a better job. Public confidence in the use of medical data is vital to ensure the NHS has the data necessary to improve care and become more productive. And it should get the funding it needs to do whatever it takes to achieve this.

There is plenty in the various repositories of NHS patient data that can be used to improve the NHS. And the NHS is not so close to perfection that it doesn't need to improve. The HSCIC has a vital role to play in making joined up data available to the people who will analyse it to drive better quality and higher productivity. They need to do a better job of making data available while keeping it secure, and they need the funding both to achieve those goals and to persuade the public that they are balancing confidentiality against improvements to care.

But it is shoddy journalism and poor research to accuse them of releasing data against patient choices when they haven't released any data. And this paranoia seriously damages the ability of the NHS to learn how to improve the quality and productivity of the care it offers.

Monday, 8 June 2015

Baumol is (sometimes) wrong: service productivity can improve

A recent article in the Financial Times about the problem of British economic productivity (which has been remarkably poor recently) conveniently summed up one of the possible reasons by fingering services. Services are a rapidly growing part of the British economy and allegedly suffer from what economists call Baumol's disease. This is when the service is highly dependent on people and can't be automated or much improved by the application of technology or other forms of capital equipment.

This has been used by others to explain why the NHS doesn't see strong productivity growth, despite the desperate need for improved productivity. For example, John Appleby of the King's Fund think tank said this in the BMJ in 2012:

the prices of the inputs to healthcare have tended to rise in line with, or even faster than, costs in the economy as a whole—a reflection of the “cost disease” identified by William Baumol in labour intensive industries where the productivity increases that could offset rising pay costs are hard to achieve.

The FT's summary of the British productivity crisis summed up the Baumol problem like this:
In the 1960s the economist William Baumol noted that the productivity of a live Beethoven string quartet could not be higher than that of 100 years earlier. This effect results in higher productivity growth in manufacturing rather than other sectors. The move towards advanced services sector economies implies slower overall productivity growth in the medium term...
Applied to the NHS this is widely thought to imply that, as long as nursing and doctoring require people, there is a limit to how much we can improve the productivity of healthcare.

But the example in the FT article triggered this thought in my head: Baumol is wrong. And not just a bit wrong, very, very wrong.

The key argument is that you can't perform a Beethoven string quartet with fewer than 4 musicians. So how can the productivity improve? QED. But this depends on how you look at productivity.

London's Wigmore Hall is one of the best venues in the world for chamber music. It seats 545 people. So a string quartet can serve beautiful music to about 500 people in one sitting. To reach 1,000 people they need to perform 2 concerts; to reach 1,000,000 they need 2,000 concerts (or a concert every day for the best part of six years). So it looks like Baumol wins. They can't be more productive than that.

Except they can. If the issue is how many people can listen to the music, the modern world has an option not available to Beethoven when he wrote the music: we can record it and broadcast it. A single concert in the Wigmore can be live-streamed to the internet where it is trivial for a million users to listen at the same time (it's not quite the same experience, but then driving a car is less fun than riding a horse, yet from most points of view it is way better for the economy and few people still rely on horses). To me, a million listeners is a productivity improvement of more than a factor of 2,000 for string quartets. Baumol is wrong and by a very big factor.
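The back-of-the-envelope arithmetic can be made explicit (the seat count and audience figures are the round numbers used above, not real box-office data):

```python
# Back-of-the-envelope: string-quartet "productivity" measured as the
# number of concerts needed to reach a given audience.

SEATS_PER_CONCERT = 500        # Wigmore Hall seats 545; rounded as in the text
TARGET_LISTENERS = 1_000_000

# Live-only world: every listener must occupy a seat.
concerts_live = TARGET_LISTENERS // SEATS_PER_CONCERT
print(concerts_live)           # 2000 concerts: a daily concert for ~5.5 years

# Streamed world: a single concert reaches the whole audience at once,
# so the productivity gain is simply the ratio of the two.
print(concerts_live // 1)      # a factor of 2,000
```

The point is not the precision of the numbers but that the gain scales with the audience: a bigger streamed audience means a bigger factor, with no extra musicians.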

This doesn't apply directly to the NHS: you can't live-stream a hip replacement (well, you can, but it doesn't get more hip replacements done). But there is plenty of stuff in the NHS where modern technology could be used to greatly improve productivity. We could, for example, use technology to spread good ideas far more quickly. One surgeon finds a better way to do cataract operations and the method spreads quickly across the NHS online. Or we create online patient records that hold all of a patient's medical history so no medic has to waste time asking about it every time they see the patient.

But the NHS doesn't do even these obvious things, or at least it doesn't do them across the whole system. It underspends on technology; it uses paper when computers would do a better job; it does boring stuff manually when automating it would free up staff time to care for patients.

Baumol is wrong and the sooner we realise it the faster we will improve NHS productivity.


Sunday, 7 June 2015

Information is more beautiful if you do dataviz right

This is a reworking of some Guardian data visualisation work on UK mortality statistics where I thought their dataviz was less than ideal. It was one of my first attempts to use Tableau so it could still do with some improvements. But I include it to show that a little work can often yield big dataviz improvements.

Misunderstanding the causes of the 2013 performance crisis in English A&E departments

Another analysis I wrote during the 2013 performance crisis in England’s A&E departments. Jeremy Hunt had just blamed the withdrawal of many GPs from providing out of hours (OOH) services for “rising attendance” and a whole range of other purported causes were being suggested. The debate since then has shifted but still lacks much grounding in actual facts. So what I said then is still worth reading.


I will update some of the analysis here in new posts when I can to show whether new evidence changes the picture (though it mostly doesn’t).


Apparently, our hospitals are being swamped by too many patients turning up in A&E. And the busy A&Es are now becoming slow and annoying places to go with the largest number of patients waiting more than 4hrs for a decade or so. It is about time something was done, is it not?


Indeed many people have weighed in to the debate with suggestions about exactly what needs to be done which range from Jeremy Hunt’s idea that the GP contract needs to be renegotiated (so they have to take back responsibility for out of hours services), through the idea we need to re-educate patients about what an emergency is, to the idea that we need a large increase in medical staffing to cope with the tsunami volume of attendances. And now there is a row about how much money needs to be spent to fix the mess.


Trouble is, nobody seems to have looked at the data to check what the problem is or when it started. Commentators are now behaving a little like doctors who prescribe treatments without either seeing the patient or checking their medical history.


While many commentators went straight to solutions, John Appleby and team at the Kings Fund bothered to look at the headline numbers (their numbers and a simple chart illustrating the mistake are here).




They pointed out that the apparent large increase in A&E volume since the GPs started opting out of OOH services was an artefact of another change that happened at the same time: we started opening new Minor Injury Units (MIUs) and Walk In Centres (WICs) and counting the numbers attending those. These don’t open 24 hrs a day so probably don’t deal with OOH refugees from GPs. They have attracted a large number of patients while the attendances in 24hr major A&Es has not changed much. This single piece of analysis undercuts any blame being passed to the GPs for the current problems in A&E.


Most other commentators have been misled by looking only at the total and assuming that major A&Es are being swamped. Interestingly, the Department of Health were advised not to focus on the aggregate data in the mid 2000s precisely because doing so obfuscates the location of performance problems; they chose to present the—better looking—aggregate only.


But there are further subtle versions of that theory being repeated. The HSJ reported on Thursday May 16 that an extra million people had attended A&E in 2012-13 compared to the previous year. But they forgot to put this in context. The rise was almost entirely the result of high attendance in the middle of 2012, so clearly unrelated to the current performance problems. Performance was much less of a problem in those weeks in 2012 with exceptional attendance. Here are the weekly numbers (total and major attendance):
It is also worth looking at the more recent data in more detail. We looked at the weekly performance of, and the weekly attendance at, major A&Es. There is no apparent relationship between performance and the national volume in major A&Es, where the performance problems exist:


People interested in the numbers might also want to look at the scatterplot of national performance versus attendance here as this makes the lack of relationship between the volume and performance even clearer:




Note that weeks in 2013 are highlighted in orange clearly showing that poor performance is not associated with the volume.


We also looked at the same analysis for many individual trusts (an analysis we built into the NHS Commissioning Board's (now NHS England's) Integrated Intelligence tool). A few show vague relationships but most show nothing, which strongly suggests that volume of attendance does not cause the performance problems.


Analysis taken from the more detailed HES dataset (which records when patients arrive, but is less recent) and some individual trusts also confirms that the problems don't look related to out of hours care. Not least because A&Es are not that busy out of hours and most problems occur during the day when GPs are open. Nor is there any indication that particular groups of patients, such as the infirm elderly, are suddenly arriving in larger numbers.


What we can tell from this detailed analysis, though, is that the patients needing a bed are the biggest problem. We can also tell from the time of day and the day of the week that the problems occur that this has some relationship to how hospitals manage their beds.
So What?
So, overall, we have a performance problem but the plausible stories we have been discussing are not obviously compatible with the actual data. If they lead to actions or even policy changes then a) the changes won’t fix the problem and b) we have engaged in an act of story driven policy not evidence driven policy.


We have been here before. In 2005 the Healthcare Commission published a report analysing the factors related to A&E performance. Its main conclusion was that the obvious things that look like good explanations of poor performance (like too many attendances, too few staff…) don’t explain anything.


I said the following in a BMJ article at the time:
"…there is no relation at all between staffing levels and performance. Nor does any relation exist between changes in staffing and performance. None of the intuitively “obvious” factors that might be thought to influence performance seem to matter much.
… the way a department is organised has more influence on its performance than even major changes in staffing. In other words, management matters. And just increasing resources is a poor way to fix performance problems."
This incited much incredulity despite being an accurate summary of the regulator’s work.
We seem to have arrived at the same point again. We are identifying problems and proposing solutions neither of which are consistent with the evidence while ignoring known practices that work. Another report from Nigel Edwards agrees with this analysis.
I have no magic bullet to offer. But stepping back and looking at the data before proposing solutions based on nice theories that are inconsistent with the facts would probably help.


Here are some useful thoughts (based on years of A&E analysis and observation) that might help focus the debate:

  • Just because the problem manifests in A&E doesn’t mean it is an A&E problem. If the problem is finding beds for emergency admissions, that is a hospital bed management problem not an A&E problem. Adding more doctors to A&E will make no difference at all to this.
  • The plausible stories being told assume causality runs from volume to long waiting times. This feels right, but careful observation suggests it might be exactly the wrong way round. Long waits mean there are more people in the department so things feel busier despite the volume being perfectly normal. This is at least consistent with the statistics, unlike the idea that too many attendances make things busy, which is not.
  • Slow A&E processes don’t mean staff are not working hard enough. Poor processes are a problem of coordination across the staff and the different departments in the hospital. Poor coordination makes life worse for both staff and patients. This is, and I hate to use a dirty word but it is important, a management problem. A failure, for example, to coordinate hospital discharges (which the hospital can control) with the pattern of A&E arrivals (which they can’t control, much) will lead to long waits for many patients in A&E. A failure to segregate processes for patients needing simple treatments from those needing more medical time will lead to long waiting times for all and much wasted medical effort that does nothing for the clinical quality or patient experience.
  • While there are bed-blockers using up valuable beds because their social care is uncoordinated with their hospital care, this is unlikely to be the primary cause of A&E blockage unless every hospital bed is blocked. In most hospitals about 20% of the patients in beds will leave on any one day. The typical hospital will do a discharge round in the afternoon allowing consultants to sign those patients as fit to go home. Mostly those patients will be fit to leave in the morning but will occupy a bed for the day. Most hospitals where any analysis has been done could easily accommodate all their A&E admissions just by discharging patients first thing in the morning; most still don’t. Many have no clue as to who is in what bed or when they should leave (imagine a hotel that had to send porters to check whether any rooms were free when new guests arrived; that is the typical hospital).
  • The NHS’s disdain for “bureaucrats” (which is an even dirtier word for “management”) leaves the system blind to many key causes of performance problems. Getting medical staff to work in a coordinated way across the whole hospital so the overall system works well is a management problem. Good management can make a huge difference to clinical quality, patient experience and the quality of the working day of all the medical staff. But improvement isn’t going to happen if we keep misidentifying the problem and assuming that management is a parasitic burden on the medical staff and not a lubricant to smooth their work and make it more effective.




What's wrong in England's A&E departments?

This was originally written during the winter 2013 A&E performance crisis in England’s A&E departments. Most of it still applies.


A&E departments in England are currently in crisis. Patients are waiting too long and some departments are even claiming they can no longer guarantee safe treatment. There has been an orgy of speculation as to why from journalists, commentators and politicians. But most are acting like a doctor who neither sees the patient nor checks her medical history before recommending major surgery.


Simply put, most diagnoses and their proposed treatments are not compatible with the basic facts. Let's look at where the blame has been laid and see what the evidence says. Here are some of the more common proposed causes:
  • Rocketing volume of attendance at major A&E
  • GP OOH contract leading to more attendance at A&E
  • More ill casemix driving more admissions
  • NHS 111 sending more people to A&E


Solutions that have been proposed include:
  • rewriting the GP contract
  • major adjustment to the marginal tariff to reward A&Es with extra volume
  • Rethinking OOH care to direct more away from A&E
  • Lots more staff in A&E


Most of these problems and their supposed remedies assume that volume is the problem. But it clearly isn’t. The performance problem is concentrated in 2013. But that period hasn’t seen particularly high attendance. The year 2012/13 was high, but most of the excess volume was concentrated in the middle of 2012 when performance was OK (a failure to look at the weekly data seems to have misled many commentators). There are a lot more people classed as A&E attends now than when the GP contract was signed, but almost all of the large increase is in minor injury units and walk-in centres, not major A&Es (again commentators have confused the two by failing to look at the detail). Major A&Es have not seen notably large increases in attendance over the period and the weekly attendance has no relationship at all to performance.


These facts alone should be enough to absolve the GPs of any blame. And they also suggest that NHS 111 isn’t at fault. More importantly, none of the proposed remedies that are designed to curb volumes or provide extra money for extra volume will have any effect on the crisis.


Far too many experts who should know better have interpreted the key symptom incorrectly. They assume that a busy A&E is a sign of increased volume. It isn’t (at least in this crisis). When A&Es treat patients slowly (for whatever reason) they become busy even if the volume doesn’t change. The naive observations that volume is unsustainable have got cause and effect the wrong way round.


So the big question, and the one that has to be answered correctly to solve the crisis, is why are A&Es so slow?


Here is an idea that has the benefit of being entirely consistent with the known facts and is compatible with many detailed observations and statistics from A&Es (either collected directly or from the HES dataset which, unfortunately, isn’t yet available for the last few months to prove the point definitively). It’s the damn beds.


The evidence that points to the problem being about beds comes from several observations. The patient subgroup that spends the most time in A&E is the group who are eventually admitted. There is also some evidence that the larger the number of admissions the slower the A&E (but there is a problem about whether this is cause or effect as rushed decisions often lead to larger number of unnecessary admissions). We know that far too many decisions about admission are made at the last minute (this manifests as a spike in the waiting time for admissions just before 4 hours). And we know from looking at the waiting times across the day and week that the worst performance usually comes when beds are busiest.


A few commentators have pointed to beds as part of the problem. But too many have naively accepted the plausible excuse that this is caused by bed-blocking chronic patients waiting for social care to sort out their transfer. This may well contribute to the problem, but it can’t explain it all. Most hospitals are not actually full of bed-blockers and still manage to discharge 15-25% of their patients on a normal weekday. These patients will usually be fit to go home at the start of the day but many will occupy a bed until the afternoon bed round. This means that the discharges come at the worst time of day to accommodate the needs of the A&E admissions. Small changes in discharge patterns can often free up more than enough beds to meet the needs of A&E, but few hospitals have made the change.
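The discharge-timing arithmetic can be sketched with some assumed round numbers (the hospital size, discharge fraction and admission volume below are all hypothetical, chosen only to sit within the 15-25% range quoted above):

```python
# Illustrative only: every figure here is an assumption, not data from the post.

BEDS = 600                       # hypothetical acute hospital
DAILY_DISCHARGE_FRACTION = 0.20  # within the 15-25% range quoted above
AANDE_ADMISSIONS_PER_DAY = 80    # hypothetical daily emergency admissions

beds_freed_per_day = int(BEDS * DAILY_DISCHARGE_FRACTION)
print(beds_freed_per_day)        # 120 beds become free over the day

# If discharges wait for the afternoon bed round, those 120 beds sit occupied
# all morning, exactly when emergency admissions start to arrive. Moving the
# same discharges to first thing in the morning leaves a comfortable surplus:
surplus = beds_freed_per_day - AANDE_ADMISSIONS_PER_DAY
print(surplus)                   # 40 spare beds, with no new capacity built
```

On these numbers the hospital never needed more beds, only earlier discharges, which is the whole point: the shortage is one of timing, not capacity.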


To summarise: most public discussion and most policy fixes assume the problem is related to volume and assign blame to the GP OOH contract or NHS 111 problems. But the data clearly shows it isn’t their fault. It is also probably not entirely the fault of the A&E departments but of a hospital-wide failure to coordinate discharges with admissions.


We could spend the next six months funding new staff in A&Es, renegotiating the GP contract, redesigning the A&E tariff and fixing NHS 111. And the core problem would still be there.


Or we could pay attention to the data, diagnose the problem correctly and fix it.

Why big, centrally-driven, IT projects fail (especially in the NHS)

Big IT systems developed top-down fail for the same reasons that centrally planned economies fail. Central planners have few clues about how to improve the work of the doctors and nurses on the shop floor. Better to let people solve their problems bottom up.

[Note:This article was originally written just after the coalition government announced it was dismantling the giant programme the Blair Government started in an attempt to speed up the computerisation of the NHS, the National Programme for IT (NPfIT). But the lessons are still true.]


So, finally, the government is facing pressure to put a stake through the heart of the flagship NHS IT project. Uncle Tom Cobbley et al. are wading into the debate with what they think should have been learned. Most of these explanations will be wrong, some might contain partial, biased versions of the truth, and none will be useful in preventing the next big epic fail.


The real lesson is nothing to do with IT, not about who should have been consulted, doesn’t relate to who was the responsible officer and won’t be fixed by more ruthless contracts or better project management, though these will all be suggested as solutions. The real lesson is one that government needs to learn for many of its big projects, not just its big IT projects.
The real lesson is also nothing to do with the project having the wrong goals. Most of what it was intended to achieve is highly laudable and would benefit the NHS and the health of the nation if it was achieved. It is hard for many outsiders, for example, to imagine how the NHS functions at all without shared accessible computerised clinical records. Providing effective patient management and clinical record systems for hospitals can’t be bad.


But how should this be achieved? The Blair Government was persuaded that the most effective way to do it was to plan it from the centre. This would enable economies of scale, guarantee system compatibility across the country and enable the buying power of Whitehall to cut better deals with big powerful suppliers who might be expected to bully weak hospital management into wasting money. And these benefits were mostly achieved. Several big firms withdrew, such was the ruthless pressure on performance and cost.


The problem was that economies, cost effectiveness and compatibility are a lot less important than systems that actually work for the people who use them. And this problem is multiplied many-fold when the users are as diverse as the NHS. Even within a single hospital it is hard to satisfy all the departments with a single approach (example: users in A&E need to do many small things quickly, a system with a 30s delay for login and user authentication is essentially useless to them, yet many systems are designed that way as most users in the rest of the hospital don’t mind). Yet the centrally driven plan essentially tried to satisfy everyone with a system designed centrally. But nobody in the centre can ever get this right especially when many of the real needs only manifest when users start using the system. And this isn’t the sort of problem that can be solved by more extensive consultation with the users: they may not have a clue what they need and may not find out unless they are fully engaged in testing. Many of the best solutions might not emerge until there has been a large amount of experimentation but that is anathema to a centrally driven project.


Big enterprises driven from the centre have always suffered from these failures. It is the same reason that centrally planned economies are the ultimate epic fail in economic history: whatever the benefits in theory of a system without the messiness, costs and inefficiencies of a pluralist market economy, they don’t work in practice. When central plans fail, they fail for the whole economy (or the whole NHS) and they take longer to fix as the feedback that things are not working is suppressed throughout the system as nobody is rewarded for admitting the system doesn’t work. It isn’t that the players in pluralist economies don’t make mistakes: any given business is just as likely to screw up as any government planner. But, in a pluralist economy, there are many businesses so one failure doesn’t screw the whole system. And businesses are disciplined by their customers: if what they make doesn’t sell there is no escaping the failure, so they spot them and correct them faster. Because there are many experiments, there is more information about what works and what doesn’t work, innovation that works is rewarded and information about failures spreads rapidly. The gains in rapid improvement vastly outweigh the inefficiencies that come from the smaller scale and the coordination problems with multiple parties. Disciplined pluralism thrashes centrally driven planning every time by a large margin.


It might sound like this is some abstract economic bullshit and can’t apply to big NHS systems that need to be coordinated and need to have minimal standards of quality for the good of patients. But it isn’t. And there is even an example where a decentralised approach worked. Better still it generated one of the best and most effective IT systems in healthcare and it is a system which delivers far greater benefits than anything the NPfIT has ever hoped for. The system was developed by the US Veterans Administration (responsible for the hospital care of military veterans in the USA). The system was built as a series of modules, skunk-works style and in the face of strong and sustained opposition by the IT leaders in the VA. But groups of IT savvy doctors knew they could make big improvements in their daily clinical work if they had systems that did a better job with patient information (one part was developed to prevent patients being given the wrong drugs, a serious problem for them at the time). The skunk works used open source techniques and designed in the ability of each small module to talk to other modules. Eventually they got a modular system that allows any VA doctor anywhere (in any GP office or any hospital) to access all the relevant information about one of their patients. The clinical benefits were instrumental in raising standards across the VA from some of the worst to, by some experts’ reckoning, the best in the US health system.


So disciplined pluralism works even in IT projects. Better still, the more complex the need, the greater the benefit of this approach will be. Governments are unlikely to adopt it though, as they can’t admit how little they know about what the NHS needs or how to deliver it. And most lobbyists will just encourage this belief by claiming that we just need to consult more doctors or hire better project managers.