The world is certainly not short of pundits claiming to have a grasp on where the economy is heading or what the future holds for Ukraine. But history reminds us how poor humans are at making predictions in complex situations. Could a fully automated algorithm beat the predictions of these pundits? Not yet. But history also has a way of vindicating the power of algorithms over human judgement.
The most famous early example of this was the 1997 defeat of world chess champion Garry Kasparov by IBM's "Deep Blue". The decisive sixth game lasted only 19 moves, with the deciding factor being human error: by reversing his opening move order, Kasparov allowed Deep Blue to make a tactical sacrifice. It was the first time a computer defeated a reigning world champion in a match under classical time controls.
For many chess enthusiasts, the loss was a Copernican blow to the human ego. But for others, computers represented a novel tool to aid their mastery of the game, delivering high-level analysis in real time. Previously, grandmasters prepared for important games by poring over transcripts of older games and studying thick volumes of openings and endgames. With computers, professional chess players could quickly brute-force billions of tactical combinations and devote more time to deeper strategic questions.
A similar dynamic emerged in the world of insurance five years earlier. Before Hurricane Andrew devastated much of Florida in 1992, insurance underwriters typically estimated the size of future losses only by examining historical losses. After Andrew, it became clear that past data is sometimes a poor indicator of the future. Computational methods offered an alternative — rather than exclusively looking at past damage, catastrophes could be simulated in a digital world to predict losses.
Catastrophe models work in a manner similar to chess engines: instead of searching millions of move combinations, they simulate millions of possible hurricane trajectories. This "Monte Carlo" approach can be applied to other catastrophes as well — earthquakes, pandemics, floods, and even terrorist attacks. By examining the resulting distribution of hypothetical future losses, insurance companies can set aside enough capital to confidently meet future claims.
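The core idea can be sketched in a few lines of Python. Everything below is invented for illustration — the storm frequency, the heavy-tailed loss distribution, and the 99.5% solvency quantile are toy assumptions, whereas real catastrophe models simulate physical storm tracks against detailed exposure data:

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's method for drawing a Poisson-distributed count (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_year(rng):
    """One hypothetical year of hurricane losses.
    Frequency (1.5 storms/year) and severity (Pareto tail) are purely illustrative."""
    n_storms = sample_poisson(rng, lam=1.5)
    return sum(10.0 * rng.paretovariate(2.5)  # heavy-tailed loss per storm
               for _ in range(n_storms))

def capital_requirement(n_trials=100_000, quantile=0.995, seed=42):
    """Capital needed to cover the chosen quantile of simulated annual losses."""
    rng = random.Random(seed)
    losses = sorted(simulate_year(rng) for _ in range(n_trials))
    return losses[int(quantile * n_trials)]
```

Because individual storm losses are heavy-tailed, the capital needed to survive a 1-in-200 year is far larger than the average annual loss — which is precisely why insurers look at the whole simulated distribution rather than historical averages alone.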
The strength of these models comes not from their ability to replace underwriters, but from their ability to complement human decision-making. After getting an initial estimate, human experts typically adjust the model to account for their unique experience — in the same way, as Nate Silver suggests, a regular might correct for the dents and creases in the local pub's billiards table. The merits of humans and algorithms working together were not lost on Kasparov either. After the loss to Deep Blue, Kasparov himself helped popularise Centaur Chess, a variant in which cyborg teams of one human and one computer compete against each other.
As Kasparov described at the Oxford Martin School, pairing a human's strategic insight with a computer's tactical prowess enables these teams to beat even the strongest chess supercomputers — a feat no human has been able to accomplish unaided since 2007. But it is uncertain whether this trend will hold. As technology develops exponentially, we are approaching a point where a human player might end up being more of a liability than an asset to a cyborg team.
The implications for insurance are even more uncertain. In areas with an abundance of data, such as auto or life insurance, there is a real possibility that algorithms may put humans in the back seat. But when it comes to estimating rare events, we're still a long way off from entirely replacing groups of human experts.
Figuring out how to maximise the benefits of computer models and integrate them into human systems will be a key question for the years to come. Only one pattern seems certain: algorithms are getting exceptionally good at forecasting the future, whether the domain is chess, weather, or even geopolitical conundrums. Come 2025, we might hear about the next crisis before it occurs.
Andrew Snyder-Beattie is from the University of Oxford. He does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.