Wednesday, 14 August 2019

Good news: The AI won't kill you; Bad news: it won't solve your problems either.



AIs won't take over the world because they won't be able to work out how to do it. But they won't solve our big problems either. The hype machine behind current AI investment is founded on an unwarranted extrapolation from recent AI successes that doesn't apply to most real-world problems. We need to learn where AI can work and where it can't, or we will waste money on systems that can't possibly work.

SciFi and scientific speculation about Artificial Intelligence (AI) have ignored a very fundamental limitation in how AIs work. This limitation has surprising parallels with why some of the less pleasant political philosophies the world has seen are doomed to fail. But it has been–as far as I can tell–largely missed by serious speculators and fiction writers.

And the failure to understand the limitations of AI has consequences. On one hand there is a fair amount of worry about the dangers. On the other there is a dangerous and naive belief in AI's ability to solve many difficult problems. If I am right, neither view is justified.

I started thinking about this while reading Tom Chivers' book The AI Does Not Hate You (a really good read that covers the background and the thinking of the community of nerds who have devoted a lot of time to worrying about the consequences of AI). But I was also reading some Karl Popper books I skipped as a student (The Poverty of Historicism and The Open Society and Its Enemies). And–somewhat amazingly, since Popper wrote these before AI had even been invented–it turned out his ideas had some current relevance.

Let me explain.

An introduction: AIs in fiction
There are plenty of SciFi stories involving malevolent AIs. The 1970s saw the movie Colossus: The Forbin Project (where the US basically creates Skynet 27 years early to run nuclear defence, only to find the computer able to outsmart the government and its creators). The Terminator added time travel to a similar scenario (there is nothing new under the sun). The Matrix added virtual reality simulations to a related idea.

What happened in the worlds created by Isaac Asimov is revealing. His Robot stories featured robots with positronic brains that acted at least as intelligently as people. His unconnected Foundation trilogy didn't have any AIs, but it did have a human psychohistorian who had worked out the laws determining the future path of human history. Asimov's Hari Seldon used "calculators" to solve his equations. But the underlying idea was that, given enough data, calculations could be done that would predict the future of society and, when required, steer it to a better outcome. This idea is very important even though it didn't, as originally written, involve a superintelligent AI.

Later, Asimov (very ill-advisedly) retconned the Foundation stories to make them part of the same universe as the Robot stories, with a powerful AI as the driver of all the messing with human history. In this case the AI was benevolent. But the same assumptions apply in his optimistic world as in the dystopian stories where the AI wants to kill us all, or where it accidentally kills us all in pursuit of the goal of making more paperclips.

The fear expressed in many of the stories of rogue AIs (mirrored by the optimism in Asimov) is that, once we have created computers that can process information faster than we can, we will lose control and they will be able to dominate us (in Asimov for beneficial ends; in many other stories, for perverse or malicious ends). 

Why the assumptions are wrong
The underlying assumption in both the optimistic stories and the dystopian ones is that people are not entirely rational and that their finite capacity for reasoning can easily be exceeded once a critical mass of computing power is assembled.

There are two parts to this belief that we can be outdone by a sufficiently powerful computer. One is the belief that rational thought unencumbered by emotion will produce better solutions to problems than human thinking does. The other is that computers can become powerful enough to outmanoeuvre people in managing the world (leading either to a utopian paradise or to a dystopian slave state, depending on the orientation and goals of the AI).

The idea that better solutions will emerge if we purge decision-making of emotion is attractive to many naive scientists. But it is easy to refute because of the way the human brain works. The brain is, to some extent, compartmentalised, with different physical areas dealing with different roles. Vision is handled by different parts of the brain than smell or hearing. And human emotions, to simplify a little, can be partly localised in specific parts of the brain. When those parts are damaged we can see what sort of person emerges. The famous case of Phineas Gage is often quoted as the archetype of what a person is like when their emotions are removed (he suffered a very specific brain injury in an industrial accident). But the emotionless person turns out not to be a super-rational problem solver freed from the misleading siren calls of conflicting emotional drives. The person who emerged after Phineas Gage's injury was a complete mess, incapable of making basic decisions or getting anything done. He was heavily studied, as other people with similar injuries have been. He turned out more like Buridan's ass than a super-rational thinker. It turns out that emotions are a vital part of human thinking and decision making, not a burden that gets in the way of logic as the Vulcans would have us believe. (Though the details of Gage's recovery are often omitted and the case is more complex than often supposed; and Spock is not entirely emotionless–don't @ me.)

The computer scientist David Gelernter wrote a whole book arguing that an AI can't hope to emulate human intelligence without incorporating emotions.

But the other part of the problem is more significant for understanding why AIs won't work in the way SciFi worries they will.

Many thinkers have speculated that, when AI reaches a certain threshold of power, its ability to learn grows exponentially and it rapidly outgrows its designers. Hence the worry about it taking over the human world. And this worry has been greatly reinforced by the recent success of AI in surpassing human players at chess, Go and (apparently) even poker.

We have known for a long time that computers could beat us at some board games. Draughts (checkers) was effectively conquered by computers in the 1990s. But analysts thought for a long time that more complex games would resist the ability of computers to conquer them. Chess is far more complex than draughts (with more possible games than atoms in the known universe–see this Numberphile video for an explanation of those estimates) and some thought that only humans could think well enough to play the game effectively. Then an IBM creation beat the best player in the world (though the original Deep Blue was not an AI, just a very powerful chess computer using expert algorithms and a lot of brute-force computing power). Chess was thought hard because the search space of possible moves is far too large for brute-force searching to enumerate, however much computing power you have. Then, much more recently, AlphaZero–a learning AI–was taught only the rules of chess (not the chess-playing heuristics fine-tuned by people to simplify the job of searching the combinations of moves). And, in a short period, it became the strongest chess-playing system in the world, just by playing itself and learning what worked. Go, which is many orders of magnitude more combinatorially complex than chess, was long predicted to resist this approach. But it, too, fell: first to AlphaGo, and then to AlphaGo Zero, which–like AlphaZero–learned purely by playing itself, starting from nothing but the rules of Go.

Interestingly, in both cases the AI found new strategies that people had not thought of, and it plays the games in new ways that are very effective.

These triumphs were hailed as heralding the coming AI singularity, when a general AI will do the same exponential learning trick and take over the world. Go and chess were conquered by AIs that improved at an exponential rate once they had learned the basic rules of the games.

The reason this was unexpected to some is that their thinking was based on the combinatorial complexity of the games. The barrier to computer progress, they speculated, was that there are too many possible games to enumerate explicitly, and therefore only a different sort of intelligence, one that could think strategically, could win. This, some argued, was what distinguished human thinking.

This is bollocks.

There are only vague analogies between the real world and chess or Go. While they have been thought to have similarities to warfare and can train people in some of the strategy of war, this is mostly untrue. Real warfare is not like playing Go: it is more like playing Go when you don't know how big the board is, have only a vague and often mistaken idea where your opponent's pieces are, often don't know where your own pieces are, and have to make decisions while being jabbed in the face with a broken wine bottle.

The real distinction between the world of human society and the world of chess (and between most real human problems and finite games in general) is not combinatorial complexity. The real world is usually a lot more complicated and reflexive than chess or Go. It is more uncertain. Its rules are less clear and may not even exist. It has more than two players. And it sometimes has no clear way of working out who won, or even whether winning has any meaning.

This isn't just extra combinatorial complexity. If Go or chess were played on a bigger board with more complex rules, that would add more complexity, and the complexity would grow very quickly. But they would be just as susceptible to an AI learning how to play them (it might need another generation of processors or bigger memory chips, but the same method would surely, eventually, yield a successful computer player).
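
As a back-of-envelope illustration (the bound is loose and the board sizes arbitrary), the raw count of Go board configurations explodes as the board grows, yet every one of them is still part of a finite, fully specified game:

    # Loose upper bound: each of the n*n points is empty, black or white,
    # giving 3**(n*n) configurations (most are illegal; the growth rate is the point).
    for n in (9, 13, 19, 25):
        positions = 3 ** (n * n)
        print(f"{n}x{n} board: roughly 10^{len(str(positions)) - 1} configurations")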

Not in the real world and not for many typical human problems.

To see why, consider how the all-conquering game AIs learned to play Go and chess. The games have a finite set of rules that define a valid game or sequence of moves. There is an unambiguous way of calculating the score at the end of the game and no doubt about which side won. An AI can play games and observe the outcomes. Even if it starts with random moves, it can learn by observing those outcomes which patterns usually yield victory. It can sharpen its recognition of those winning patterns by incorporating them in future play and playing further games. It can create new games as fast as it can compute moves, and every game adds to its knowledge of which patterns yield more victories. Learning AIs are, at their heart, pattern-recognition engines that can process vast amounts of data. The limit on their learning in finite games is how fast they can generate more games to learn from. Hence the more computing power, the faster they learn. And once they start learning, the growth in expertise is exponential until they have exhausted their storage and memory capacity.
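
To make that loop concrete, here is a minimal sketch of the same idea at toy scale: a noughts-and-crosses program that starts from random play, scores every finished game unambiguously, and nudges its estimate of each position towards the observed result. Everything here (the names, the learning rate, the number of games) is an illustrative choice of mine; the point is the shape of the loop, not a claim about how AlphaZero is built.

    import random
    from collections import defaultdict

    # Winning lines for a 3x3 board (cells indexed 0-8).
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    value = defaultdict(float)    # learned value of a position for the side that just moved
    ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration rate (arbitrary choices)

    def choose_move(board, player):
        # Epsilon-greedy: usually pick the move whose resulting position we currently rate highest.
        moves = [i for i, cell in enumerate(board) if cell == "."]
        if random.random() < EPSILON:
            return random.choice(moves)
        return max(moves, key=lambda m: value[(board[:m] + player + board[m + 1:], player)])

    def play_one_game():
        board, player, history = "." * 9, "X", {"X": [], "O": []}
        while True:
            m = choose_move(board, player)
            board = board[:m] + player + board[m + 1:]
            history[player].append(board)
            w = winner(board)
            if w or "." not in board:
                return history, w
            player = "O" if player == "X" else "X"

    def learn(n_games=20000):
        for _ in range(n_games):
            history, w = play_one_game()
            for player, states in history.items():
                # The outcome is unambiguous: +1 for a win, -1 for a loss, 0 for a draw.
                outcome = 0.0 if w is None else (1.0 if w == player else -1.0)
                for state in states:
                    value[(state, player)] += ALPHA * (outcome - value[(state, player)])

    learn()
    print("positions evaluated:", len(value))

The important features are exactly the ones listed above: the rules are finite, every game produces a definite result, and the program can generate as many training games as it has time to play.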

But this sort of learning is only possible because the rules are finite and the outcomes certain. While some practical problems are like this (more later), most are not. In the real world both the rules and outcomes may be unclear.

Take, for example, the problem of extending the human lifespan. Maybe some combination of diet, physical exercise, intellectual challenge and genetic manipulation could extend the median human lifespan to 150 years. It isn't impossible. It doesn't violate the laws of biology or physics. What would a benevolent AI have to do to solve that problem? It could suggest a series of genetic interventions. And it could, with some help from us, test those ideas. But it can't learn quickly what works, because it has to wait 150 years to find out whether it achieved its goal. And even a full simulation of a functioning human body inside the AI doesn't solve this problem, because that simulation can only be built correctly from actual observations of what happens in the real world. The AI can't learn faster than it can observe the outcomes of experiments IRL.

This is worth dwelling on, not least because it helps distinguish which tasks might be soluble by a decent learning AI. Problems worth tackling this way will have unambiguous rules or inputs and outcomes that can be checked against reality with little uncertainty or ambiguity. So finite games with clear rules are not a problem. Even games involving chance are in this domain, even when they involve bluffing, which is why it seems that AIs can now play poker. Poker has unambiguous rules and outcomes. The best bluffing strategy is, in principle, learnable even if it involves judging specific characteristics of the opponent (and a good strategy will win more often than a bad one even against unknown opponents). There can't be a "perfect" strategy given the amount of randomness involved, but a good strategy can be learned.

Consider what sort of medical advances might fall to a learning computer. Some diagnostics (identifying breast cancer from scan images, for example) have very clear datasets and fairly clear known outcomes (not 100% certain outcomes, though: we know from epidemiological analysis that we send far more suspect cancers for surgery than we should). We can certainly compare a computer-driven classification algorithm for suspected cancers to the work of experienced radiologists, and we can check both against other clinical results. But most medicine is not like that. Consider trying to train an AI to do the job of a GP. The inputs are vague and ambiguous (interactions with patients are often rambling, and even the clinical history of the patient may be full of errors). The outputs–a diagnosis and a treatment–are hard to test for correctness and may also, frequently, be wrong. Worse still, we may lack any good way to check whether the diagnoses or treatments are correct at all. So, even if we trained an AI to mimic an existing GP, we might not be able to tell whether we had created a Harold Shipman or a Florence Nightingale (OK, she was a nurse and a statistician, not a GP, but you get the point).
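
For the kind of problem that does fit (fixed inputs, labels that can be checked against clinical reality), the basic workflow is short. This is only a generic sketch using scikit-learn's bundled Wisconsin breast-cancer dataset as a stand-in for scan-derived measurements with confirmed outcomes; it is not a description of any real diagnostic system.

    # Supervised learning works here because every case carries a verified label
    # that we can hold out and score against; a GP consultation has no such label.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    # The crucial step: predictions are checked against known outcomes on held-out cases.
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"held-out AUC: {auc:.3f}")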

In short, there is little hope of training effective AIs when the inputs are unconstrained, vague and variable; or where the outputs cannot be readily verified to be correct or even unambiguously good. We find this a tough problem even for human doctors.

And medicine is a narrow field of human activity. Life is much bigger and more variable. As is society, where individual lives interact in ways that create exponentially (in the proper mathematical sense) more unpredictable interactions. 

So how can AIs learn how to take over the world? There is no way for them to learn how to do so when there are no patterns to observe and learn from.

This touches on a philosophical debate that has had too little impact on either computer science or politics. Karl Popper tackled it in The Poverty of Historicism, a powerful demolition of the idea that history has predictable, teleological patterns. In particular, he was determined to show that political philosophies that rely on history flowing towards a specific destination are both wrong and extremely dangerous. Many more people agree with the second part of that than have bothered to understand the first: historicism isn't just dangerous, it is wrong.

He argues:
"My proof consists of showing that no scientific predictor–whether a human scientist or a calculating machine–can possibly predict by scientific methods, its own future results."

And, while I'm simplifying Popper's argument a lot, the basic idea holds. An AI cannot learn how to run the world by recognising patterns and outcomes because there are no consistent patterns to observe. Just as importantly, even if there were such patterns, an AI could not learn them quickly as the implications of an intervention in the world might take a whole human lifespan to become apparent. That's a pretty slow learning loop. 

To put it simply: the apparently magnificent achievements of current AI tools are based on dramatic improvements in the algorithms for pattern recognition. But they probably won't work at all and certainly won't show unconstrained exponential improvement when there are no patterns to observe.

So what?
The implications of this are not all pessimistic. There are problems where AI can give us better solutions than we currently have. Specific problems where there are clear patterns and clear outcomes may well fall to AI (eg interpreting breast scans to identify the early stages of cancer or eye scans to spot incipient eye disease before it gets too bad to fix). But these are a fairly small subset of problems in medicine or life. And many of the big problems in the human world have none of the characteristics that would enable an AI to recognise patterns and solve anything.

It would certainly help if we had a better idea about where AI investment should be directed. If we put effort into problems that current AI techniques are likely to be able to solve, we could see some big benefits. But the hype train appears to be overwhelming our judgement. The recently announced NHS decision to spend £250m on AI appears to be driven by exactly the same naive optimism that has driven huge investments in AI research in the past. And failed. Every time. DARPA spent a lot in the 1950s and 1960s but gave up in despair in the 1970s. Japan had a huge related programme in the 1980s, followed by the EU and the UK (with the Alvey programme). All of these were written off as failures by the mid 1990s. All suffered from overambition and a failure to identify which problems could be tackled with the tools available. There is little sign that the current boom has learned anything from this history.

Overoptimism may be the most important risk. We trust outputs produced by computers even when they do not deserve our trust. As Meredith Broussard argues in Artificial Unintelligence:

"One recurrent idea in this book is that computers are good at some things and very bad at others, and social problems arise from situations in which people misjudge how suitable a computer is for performing the task."

Even when we apply algorithms we do understand (and we often don't understand the details behind learning AIs), we suffer from this problem. We trawl big datasets seeking new patterns, for example in the relationship between diet and health. We find new patterns. We believe the new patterns because some clever new Big Data algorithm has found them. But the problem with big data is that many of the "patterns" are noise. The number of apparently significant statistical correlations in a large dataset grows much faster than the number of true patterns, because the number of possible comparisons grows with every variable added. The vast majority of the "new" patterns we see turn out to be noise when properly tested (see Ioannidis' famous paper Why Most Published Research Findings Are False). Or take Broussard's point on why the even more hyped science of Big Data won't help:

"Here’s an open secret of the big data world: all data is dirty. All of it. Data is made by people going around and counting things or made by sensors that are made by people. In every seemingly orderly column of numbers, there is noise. There is mess. There is incompleteness. This is life. The problem is, dirty data doesn’t compute."

If we automate the search for new patterns with clever computer algorithms or AI, we dramatically inhibit our critical faculties in assessing the results. The clever computer said it, so it must be true. Even when the dataset the computer used would be thrown out as irredeemably corrupt by any self-respecting scientist.
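
A small simulation makes the point about noise (the subject and variable counts here are arbitrary): generate a dataset with no real relationships at all, test every pair of variables, and a respectable-looking crop of "significant" correlations appears anyway.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_variables = 200, 100                   # e.g. 100 dietary/health measures
    data = rng.normal(size=(n_subjects, n_variables))    # pure noise: no true patterns at all

    n_tests, false_positives = 0, 0
    for i in range(n_variables):
        for j in range(i + 1, n_variables):
            r, p = stats.pearsonr(data[:, i], data[:, j])
            n_tests += 1
            if p < 0.05:
                false_positives += 1

    # Roughly 5% of the 4,950 pairwise tests (about 250) will look "significant"
    # at p < 0.05 by chance alone, even though every variable is random.
    print(f"{n_tests} tests, {false_positives} apparently significant at p < 0.05")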

The biggest risk from AI is not that the AI will try to take over the world. It is that we will lower our natural scepticism and trust what the computer says even when it does not deserve our trust.

We are also at risk of wasting vast sums seeking magic bullets for problems we could solve for less money using known techniques. Many medics in the NHS have pointed out that providing hospitals with computers that don't take 30 minutes to wake up in the morning, and that don't require 10 separate logins for a single clinic session, might do more good than £250m of speculative investment in AI. But AI is headline-friendly and gets the cash, while boring improvements in basic IT are just not newsworthy and get no investment.

I've rambled on for far too long already. But my basic conclusion is this: AI can solve some (narrow) problems and, if we are going to spend money, that's where we should direct it. But we should inoculate ourselves against the hype: AI won't solve the big, human problems we have and we should not waste money on programmes that assume it will.

Oh, and the idea that a future AI could take over the world is nonsense.