After the spectacular failure of financial experts everywhere to predict the 2008 crash, the whole business of prediction has come under scrutiny. The consensus is that prediction is difficult, especially about the future, as everyone from Niels Bohr to Yogi Berra is supposed to have said, and so the scrutiny extends to the closely related question of how to act in the face of a future we cannot foresee.
I’ve just read three books on these topics, by a Canadian, a Briton, and an American, and I’ll do a post on each. Today, it’s Dan Gardner’s _Future Babble_. Next up is Tim Harford’s _Adapt_, and I’ll finish with Duncan Watts’ _Everything is Obvious: Once You Know the Answer_. Time-saver: on a scale of one to five, Gardner gets 2, Harford 1.5, and Watts 4.
So, _Future Babble_. It’s a straightforward journalistic book built on the work of psychologist Philip Tetlock, who also figures prominently in _Adapt_ and makes an appearance in _Everything is Obvious_. Tetlock is famous for an extended experiment in which he assembled “284 experts - political scientists, economists, and journalists - whose jobs involve commenting on or giving advice on political or economic trends” (p. 25). Several years and over 28,000 predictions later, he assessed their results and concluded that, on average, experts did only a little better than “a dart-throwing chimpanzee”, and by some measures no better at all.
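(The book doesn’t reproduce Tetlock’s scoring method, which was more elaborate than this, but to give a flavour of how probabilistic predictions can be graded against outcomes at all, here is a minimal Brier-score sketch in Python. The forecasts and outcomes are invented for illustration.)

```python
# Minimal sketch of scoring probabilistic forecasts with the Brier score.
# Tetlock's actual analysis was richer (calibration, discrimination, etc.);
# this only shows the basic idea. All numbers below are made up.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened.
    0.0 is perfect; 0.25 is what guessing 50% on everything earns."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Probability an expert assigned to each event, and whether it occurred (1/0).
expert = [0.9, 0.8, 0.4, 0.6]   # confident calls, hit and miss
actual = [1,   0,   0,   1]

chimp = [0.5] * len(actual)     # the dart-throwing chimpanzee: 50% on everything

print(brier_score(expert, actual))  # 0.2425 - only a little better than...
print(brier_score(chimp, actual))   # 0.25   - ...chance
```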
Not all experts did equally badly, though, and Tetlock was able to identify the traits that made for more and less successful punditry. Those who did particularly badly “were not comfortable with complexity and uncertainty [and] sought to ‘reduce the problem to some core theoretical scheme’… and they used that theme over and over, like a template, to stamp out predictions. These experts were also more confident than others that their predictions were accurate” (p. 26). Those who did well “drew information and ideas from multiple sources and sought to synthesize it. They were self-critical, always questioning whether what they believed to be true really was… Most of all, these experts were comfortable seeing the world as complex and uncertain - so comfortable that they tended to doubt the ability of anyone to predict the future.”
In other words, “The experts who were more accurate than others tended to be much less confident that they were right” (p. 27).
Tetlock calls his less unsuccessful experts “foxes” (those who know many things) and the still more unsuccessful ones “hedgehogs” (those who know one big thing), after an essay by Isaiah Berlin.
_Future Babble_ chronicles many failed prophets and their off-base predictions, and shows how hedgehogs cling to their beliefs even in the face of continued failure. Gardner’s stories are weighted towards prophets of doom (Paul Ehrlich gets particularly harsh treatment, but Y2K, Peak Oil, Arnold Toynbee’s theory of history, the inexorable rise of Japan, and many others get a mention), although some Pollyannas are included too (_Dow 36,000_, for example). The impression I was left with is that Gardner sees unorthodox, cultish predictions as particularly likely to be false.
The bad news does not stop there, Gardner tells us. Not only are experts unsuccessful at prediction, and not only are “hedgehog” experts worse than the rest, but the experts most in demand as TV pundits, keynote speakers, and corporate consultants are overwhelmingly those spiky, one-big-idea types. It is not reassuring that companies not entirely unlike the one I work for look to exactly this kind of expert to guide their strategy. Why do hedgehogs do so well? As the subtitle tells us, Gardner spends much of the book exploring “why expert predictions fail”, but he also explores “why we believe them anyway”, mainly in Chapter 6.
The roots of our love for hedgehogs, despite their objectively bad success rates, are, he argues, psychological. Gardner leans heavily on the work of Kahneman and Tversky on the psychology of decision-making and behavioural economics - priming, the availability heuristic and so on - material that has appeared many times in popular books over recent years, and backs this up with several other hedgehog-loving traits: our tendency to follow authority (Milgram yet again [1]), our love of “simple, clean, confident” messages delivered in easily digestible story form, the media’s focus on successful predictions and its sieve-like memory for unsuccessful ones, and our own similar tendencies. To my mind, Gardner understates the social and political sources of demand for expert prediction in favour of the psychological. The spread of an idea depends on how easily it can diffuse through a network of people, and individual psychology is only one factor governing that diffusion. Some ideas are easy to communicate from one person to another, others difficult. Some messages inherently generate new connections (“communication is good for you!”), whereas others reshape the network in ways that inhibit further spreading (“silence is golden”).
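(To make the network point concrete, here is a toy independent-cascade simulation of my own devising - nothing like it appears in the book. It shows the same message, with the same per-exposure “infectiousness”, reaching very different fractions of two networks that differ only in structure.)

```python
# Toy illustration: how far an idea spreads depends on network structure,
# not just on the psychology of individual believers. Independent-cascade
# model: each new adopter gets one chance to convert each contact.
import random

def spread(neighbours, p_adopt, seed_node=0, trials=200):
    """Average fraction of the network that ends up adopting the idea."""
    n = len(neighbours)
    rng = random.Random(42)
    total = 0.0
    for _ in range(trials):
        adopted = {seed_node}
        frontier = [seed_node]
        while frontier:
            nxt = []
            for node in frontier:
                for nb in neighbours[node]:
                    if nb not in adopted and rng.random() < p_adopt:
                        adopted.add(nb)
                        nxt.append(nb)
            frontier = nxt
        total += len(adopted) / n
    return total / trials

n = 100
# Ring: everyone talks only to their two immediate neighbours.
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
# Random network: everyone has four contacts scattered across the population.
rng = random.Random(1)
scattered = {i: rng.sample([j for j in range(n) if j != i], 4) for i in range(n)}

print(spread(ring, 0.4))       # the idea usually fizzles out locally
print(spread(scattered, 0.4))  # same idea, same infectiousness: far wider reach
```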
What’s in the book is interesting and entertaining enough. The problem is that _Future Babble_ stops too soon and leaves many questions unanswered. Skewering the failed predictions of the past is, after all, an easy game. What we need to know is how to distinguish reliable predictions from unreliable ones, and how to proceed in the face of unpredictability.
The obstacles to prediction are chaotic systems driven by non-linearity, the unpredictability of people, and the fact that interactions among people often make the future more, rather than less, tricky to predict. But not all predictions are hopeless; the weather in some parts of the world is unpredictable, but it’s easy to predict that Arizona in August will be hot and dry. It is difficult to foresee the future shape of our digital world, but Moore’s Law has been with us for five decades, and I Hereby Predict that the computer chips of the future will be smaller and faster than those of today. Chaotic systems are ubiquitous, but not everything is chaotic. Distinguishing one from the other would be helpful, and although prediction is the subject of the book, Gardner does little to spell out exactly what kind of predictions he is talking about. He focuses on big economic, ecological, and political predictions, but is not clear about how broad a net he is casting. And while he spends much of his time skewering hedgehogs, it seems to me that plenty of foxes failed to see the financial crisis coming too.
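(The chaos point, concretely - again my example, not Gardner’s: the logistic map is the standard textbook demonstration that a one-line non-linear rule can defeat prediction, while an equally simple linear rule stays predictable forever.)

```python
# The logistic map x' = r*x*(1-x): at r=4 it is chaotic, so two starting
# points differing by one part in a billion diverge completely within a
# few dozen steps - tiny measurement error destroys long-range forecasts.
def logistic(x, r=4.0, steps=50):
    xs = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = logistic(0.4)
b = logistic(0.4 + 1e-9)
for t in (10, 20, 30, 40, 50):
    print(t, abs(a[t] - b[t]))   # gap roughly doubles each step until it saturates

# By contrast, the linear rule x' = x/2 + 1 converges to 2 from any start:
# some systems really are predictable. Telling the two apart is the hard part.
```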
And there is a contradiction at the heart of the book. Dan Gardner has written a simple, clean and confident argument to warn us against simple, clean and confident arguments. He tells us stories to warn us of the dangers of placing too much faith in stories. He gives us a book with one big idea (“Why Expert Predictions Fail and Why We Believe Them Anyway”), which is that big ideas are the most likely to be wrong. Perhaps the book needs to be written this way - part of his message, after all, is that everyone loves hedgehogs - but surely the contradictions deserve to be addressed?
While some predictions are disinterested forecasts of the future, many are made because we want to take actions that affect that future, and want to choose the right action: if we do this, then we can bring about that. Gardner quotes Kenneth Arrow at the beginning of his final chapter: “The Commanding General is well aware the forecasts are no good. However, he needs them for planning purposes” (p. 237). But he does not really engage with the question of how to make decisions in the absence of reliable forecasts beyond general exhortations to caution and humility. These are fine so far as they go, but they do not take us very far. What are we supposed to do about the continued success of unreliable “hedgehogs”?
Take my job. I work for a software company as something called a product manager, and one of the things product managers do is argue for new products, new product directions, and new features. So I’m trying to influence the decisions the company makes, and using predictions to do it, while others argue for alternative courses. And many of those predictions, mine and theirs, are Future Babble. So what do I do? Do I take the tactical route of arguing over-confidently, hedgehog-like, for my position - essentially lying about my confidence in my predictions? Or do I act like a fox and look on as the decisions inevitably follow the suggestions of more charismatic presenters? If what I want is for my ideas to be taken on board, then I guess I should put my scruples to one side and become a hedgehog. But that’s not really what I want: what I really want is for the right decision to be taken, and sometimes that’s not the one I am arguing for. I was hoping for inspiration and insight into these concrete issues around my daily work, but found nothing.
I enjoyed much of _Future Babble_, but in the end found it too limited to recommend. So I went on to the next book - Tim Harford’s _Adapt_ - in search of answers. Next post, I’ll tell you whether I found them.
[1] Is it just me, or is the standard interpretation of the classic Milgram electric-shock experiment all wrong? Participants followed instructions to administer “shocks”, judging that the authority figure in the white coat would not tell them to do something harmful without good reason, even though the man apparently receiving the shocks looked to be in pain. And… the participants were 100% right to trust that judgement. The man was an actor feeling no pain, and the authority figure was not in fact instructing the participants to do something terrible. Why is this anything other than a story of good judgement on the part of the participants?