Who predicts the future best: seasoned domain experts, statistical models, or amateur forecasters? A research review by the consultancy Arb finds that none of the three wins outright, but the rankings may still surprise you. Top “superforecasters” hold a modest but real edge of around 10% over domain experts. The famous claim of a 30% advantage largely evaporates under fair methodological scrutiny. Statistical models, meanwhile, shine only under narrow conditions: stable environments, clean data, and continuous trends. Most real-world problems fail on at least one of those fronts.
What makes forecasters genuinely useful goes beyond raw accuracy. Unlike experts, they must attach explicit probabilities to their predictions and maintain public track records, and they have no institutional skin in the game. That last point matters more than it sounds. Experts employed by governments or research funders may face subtle pressure to present rosier pictures than the data warrants. Anonymous forecasters on platforms like Metaculus don’t. During COVID-19, it was an anonymous Metaculus user (not a public health authority) who sounded one of the earliest alarms.
The practical takeaway for governments is straightforward: don’t pick one oracle and trust it blindly. The research consistently shows that aggregating diverse, independent viewpoints outperforms any single method. In practice, this means pairing domain experts with trained forecasters, feeding both with good statistical models, and building internal cultures where staff quantify their confidence rather than just stating opinions. Concretely, the authors recommend anonymous internal polls, public track record monitoring, and budgets for contracting external superforecasters on high-stakes decisions. The cost is low, and the potential upside is enormous: better pandemic responses, smarter R&D bets, fewer geopolitical surprises.
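To make the aggregation point concrete, here is a minimal sketch in Python. The three sources, their probabilities, and the outcomes are invented, and the pooling rule (averaging log-odds, i.e. a geometric mean of odds) is just one common choice rather than anything taken from the Arb review; the point is only that when each source has a different blind spot, the pooled forecast earns a lower Brier (squared-error) score than any single source.

```python
# Illustrative only: sources, probabilities, and outcomes below are invented,
# and mean-of-log-odds is one common pooling rule, not the review's own method.
import math

def brier(p: float, outcome: int) -> float:
    """Squared error of a probability forecast against a 0/1 outcome (lower is better)."""
    return (p - outcome) ** 2

def pool_log_odds(probs: list[float]) -> float:
    """Aggregate forecasts by averaging their log-odds (geometric mean of odds)."""
    mean = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    return 1 / (1 + math.exp(-mean))

# Three binary questions, three independent sources, each with one bad miss.
questions = [
    {"expert": 0.90, "forecaster": 0.30, "model": 0.80, "outcome": 1},
    {"expert": 0.30, "forecaster": 0.85, "model": 0.80, "outcome": 1},
    {"expert": 0.20, "forecaster": 0.25, "model": 0.70, "outcome": 0},
]

sources = ["expert", "forecaster", "model"]
for s in sources:
    avg = sum(brier(q[s], q["outcome"]) for q in questions) / len(questions)
    print(f"{s:>10}: Brier {avg:.3f}")

pooled = sum(
    brier(pool_log_odds([q[s] for s in sources]), q["outcome"]) for q in questions
) / len(questions)
print(f"{'pooled':>10}: Brier {pooled:.3f}")  # lower than every individual source here
```

In this toy run the individual sources score roughly 0.18 to 0.19 while the pooled forecast scores about 0.11, because each source's lone big miss is diluted by the other two.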