Old but important ...
It is the somewhat gratifying lesson of Philip Tetlock's new book, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton; $35), that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us.
... Tetlock claims that the better known and more frequently quoted [experts] are, the less reliable their guesses about the future are likely to be. The accuracy of an expert's predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote.
He picked two hundred and eighty-four people who made their living commenting or offering advice on political and economic trends... By the end of the study, in 2003, the experts had made 82,361 forecasts.
Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable.
And the more famous the forecaster the more overblown the forecasts. "Experts in demand," Tetlock says, "were more overconfident than their colleagues who eked out existences far from the limelight."
Expert Political Judgment is just one of more than a hundred studies that have pitted experts against statistical or actuarial formulas, and in almost all of those studies the people either do no better than the formulas or do worse. In one study, college counsellors were given information about a group of high-school students and asked to predict their freshman grades in college. The counsellors had access to test scores, grades, the results of personality and vocational tests, and personal statements from the students, whom they were also permitted to interview. Predictions that were produced by a formula using just test scores and grades were more accurate. There are also many studies showing that expertise and experience do not make someone a better reader of the evidence. In one, data from a test used to diagnose brain damage were given to a group of clinical psychologists and their secretaries. The psychologists' diagnoses were no better than the secretaries'.
Tetlock's experts were also no different from the rest of us when it came to learning from their mistakes. Most people tend to dismiss new information that doesn't fit with what they already believe. Tetlock found that his experts used a double standard: they were much tougher in assessing the validity of information that undercut their theory than they were in crediting information that supported it. The same deficiency leads liberals to read only The Nation and conservatives to read only National Review.
Tetlock found that ... experts routinely misremembered the degree of probability they had assigned to an event after it came to pass. They claimed to have predicted what happened with a higher degree of certainty than, according to the record, they really did. When this was pointed out to them, by Tetlock's researchers, they sometimes became defensive.
Low scorers look like hedgehogs: thinkers who "know one big thing," aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who "do not get it," and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible "ad hocery" that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.
Tetlock did not find, in his sample, any significant correlation between how experts think and what their politics are. His hedgehogs were liberal as well as conservative, and the same with his foxes. (Hedgehogs were, of course, more likely to be extreme politically, whether rightist or leftist.) He also did not find that his foxes scored higher because they were more cautious—that their appreciation of complexity made them less likely to offer firm predictions. Unlike hedgehogs, who actually performed worse in areas in which they specialized, foxes enjoyed a modest benefit from expertise. Hedgehogs routinely over-predicted: twenty per cent of the outcomes that hedgehogs claimed were impossible or nearly impossible came to pass, versus ten per cent for the foxes. More than thirty per cent of the outcomes that hedgehogs thought were sure or near-sure did not, against twenty per cent for foxes. The upside of being a hedgehog, though, is that when you're right you can be really and spectacularly right. Great scientists, for example, are often hedgehogs. They value parsimony, the simpler solution over the more complex. In world affairs, parsimony may be a liability—but, even there, there can be traps in the kind of highly integrative thinking that is characteristic of foxes.
Tetlock also has an unscientific point to make, which is that "we as a society would be better off if participants in policy debates stated their beliefs in testable forms"—that is, as probabilities—"monitored their forecasting performance, and honored their reputational bets." He thinks that we're suffering from our primitive attraction to deterministic, overconfident hedgehogs. It's true that the only thing the electronic media like better than a hedgehog is two hedgehogs who don't agree. Tetlock notes, sadly, a point that Richard Posner has made about these kinds of public intellectuals, which is that most of them are dealing in "solidarity" goods, not "credence" goods. Their analyses and predictions are tailored to make their ideological brethren feel good—more white swans for the white-swan camp. A prediction, in this context, is just an exclamation point added to an analysis.
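Tetlock's proposal of stating beliefs as probabilities and then tracking how those forecasts fare can be made concrete with a standard calibration measure. Here is a minimal sketch (my illustration, not something from the book) using the Brier score, the mean squared gap between a stated probability and what actually happened; the numbers are made up to loosely echo the hedgehog/fox gap described above:

```python
# Brier score: mean squared difference between the stated probability
# of an event and its outcome (1 if it happened, 0 if not).
# Lower is better; a forecaster who always says 0.5 scores 0.25.
# NOTE: the example probabilities below are hypothetical, not Tetlock's data.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if the event occurred, else 0."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hedgehog who calls ten events "nearly impossible" (p = 0.05)
# pays heavily when two of them occur; a fox hedging at p = 0.20
# on the same ten events scores better on the same record.
record = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
hedgehog = brier_score([0.05] * 10, record)  # 0.1825
fox = brier_score([0.20] * 10, record)       # 0.1600
print(hedgehog, fox)
```

The point of such a score is exactly the "reputational bet" Tetlock describes: a confident extreme probability is rewarded when right but penalized sharply when wrong, so overconfidence shows up in the ledger instead of being forgotten.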
For more, see "Everybody's an Expert" by Louis Menand, The New Yorker, December 5, 2005.